Tasmania's astronomers, I knew, were well regarded, and with a 4 ('above world standard') they did quite well, as I expected. Or did they?
I was surprised to learn that a rating of 4 in fact placed them below the national average. Astronomy achieved a quite remarkable national average score of 4.2. Of the thirteen institutions rated, six were rated at 5 ('well above world standard') and five at 4. Whereas political science, like most other social sciences, seemed to have the kind of bell curve one would expect, with a distribution about a mean that (surprisingly to me, a chair of an IPSA Research Committee) was below world standard, astronomy seemed to be, with only two exceptions, better than world standard. How could this be? The answer throws into question the whole ERA exercise as a means of comparing disciplines, because the result for astronomy seems to be largely an artefact of the methodology employed.
Part of the problem seems to be the peculiarly Australian penchant for emphasising research income (an input measure) when assessing research quality (an output measure). There is a strong case for using multiple indicators in assessing research performance. Assessing the quality of research papers according to the quality of the journals in which they appear comes dangerously close to committing the ecological fallacy, so other measures are valuable, but there seems to be an absence in the international literature of reports using research income as an indicator of either research 'performance' or 'quality' (Martin, 1996). There seems to be a singular fixation in Australia with this measure, which commits the fallacy, well known to students of policy analysis, of confusing an input measure with outputs. Research income might well be important in providing research infrastructure funding to support research, but if we are interested in performance in terms of either effectiveness or efficiency we must focus on outputs and their relationship to inputs. We certainly would not regard a car as excellent simply because it cost a lot to buy and to operate.
Indeed, using research income introduces an acknowledged bias that is clearly demonstrated in astronomy. Large telescopes are expensive instruments, and research quality (unsurprisingly) is highly correlated with telescope size. Martin (1996: 351) emphasises that size-adjusted indicators are vital if smaller research units are to be compared fairly with larger ones, yet most scientometric studies rely solely on size-dependent indicators such as publication or citation totals, unadjusted even for number of staff or income. The basic point, that large budgets allow more researchers to be employed and more researchers tend to produce more papers, seems to have been lost sight of. The ERA rewards large budgets and fails to adjust for size. Its results therefore inevitably conflate size and quality to some extent.
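To make the point concrete, here is a minimal sketch (in Python, using two hypothetical departments and entirely made-up figures, not ERA data) of how a size-dependent indicator such as total citations can point the opposite way to a size-adjusted indicator such as citations per staff member:

    # Illustrative only: two hypothetical departments with invented figures.
    departments = {
        "Large Dept": {"staff": 40, "citations": 4000},
        "Small Dept": {"staff": 8, "citations": 1200},
    }

    for name, d in departments.items():
        total = d["citations"]                   # size-dependent indicator
        per_staff = d["citations"] / d["staff"]  # size-adjusted indicator
        print(f"{name}: total citations = {total}, per staff = {per_staff:.0f}")

    # The large department 'wins' on the size-dependent measure (4000 vs 1200),
    # but the small department comes out ahead once size is adjusted for
    # (150 vs 100 citations per staff member), which is Martin's point about
    # comparing smaller units fairly with larger ones.

By the same logic, a measure built on research income alone will always favour the unit with the bigger budget, whatever its output.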
While multiple indicators are recommended, those most often suggested are things like numbers of publications or citations, peer evaluation or estimates of the quality, importance or impact of publications (perhaps assessed by peer review). Nobody seems to suggest counting inputs.
There are also acknowledged biases in any attempt to assess quality in astronomical research. One is a language bias: a requirement to publish in English advantages anglophones, who tend not to read or cite papers in other languages, and citation databases provide uneven coverage of foreign-language journals (Sánchez and Benn, 2004: 445). Another bias stems from the tendency of each community to over-cite its own results, and papers from large countries receive more citations than those from small countries. These biases are thought to favour citation of papers from the large North American and UK astronomy communities, but some also favour Australian researchers.
One possible reason for astronomy doing so well in a research quality assessment is that astronomers have long experience with the task, with published research on measuring performance going back more than 25 years (Martin and Irvine, 1983). Certainly, astronomers in other countries seem to be particularly adept at demonstrating their claims to research pre-eminence. While one might gain the impression from the ERA that Australian astronomy must lead the world, this claim would be disputed by the Canadians, who consider that they themselves occupy that position.
One of their number, Dennis Crabtree (2009: 1), recently claimed that
Their [Science Citation Index] August, 2005 report on Science in Canada, which covered papers published in a ten-year plus ten month period, January 1994 - October 31, 2004, showed that Canada ranked #1 in the world in average citations per paper in the "Space Science" field. An examination of the journals included in the Space Science field shows that the field is dominated by astronomy.
Perhaps Australia had overtaken Canada by the time the ERA took place? No. Crabtree (2009: 2) thinks not:
Canadian astronomy's excellence on the world stage continues. ScienceWatch's report on Science in Canada from May 31, 2009 indicates that of all science fields, astronomy had the highest impact relative to the world. Canadian astronomy papers published between 2004 and 2008 were cited 44% above the world average. For comparison, astronomy papers from the UK and France, for a similar period, were cited 41% and 21% above the world average.
A complete version of this paper, with references, can be downloaded by clicking here.