Evaluating the data
ERA is a quality-based evaluation exercise, so the quantity of outputs mattered little.
The main focus of the evaluation was indicators of research quality. For disciplines subject to citation analysis, this was primarily the proportion of articles in highly ranked journals and the citation data on those articles.
For disciplines not subject to citation analysis, the main indicator of quality was the peer review of selected outputs. While the quality indicators were the most important, other indicators were also incorporated:
- Indicators of research volume and activity
Research volume and activity is considered on the basis of total research outputs and research income within the context of the eligible researcher profile.
- Indicators of research application
Applied research is considered on the basis of research commercialisation income and other applied measures.
- Indicators of recognition
Research recognition is considered on the basis of a range of esteem indicators.
Research Evaluation Committees (REC)
The evaluation of the ERA data was carried out by Research Evaluation Committees (RECs), comprising experienced, internationally recognised discipline experts selected by the ARC. There was one REC for each cluster.
After the ERA submission closed, the ARC assigned material to appropriate reviewers, who submitted preliminary evaluations of it. These preliminary evaluations, along with all the relevant ERA indicators, were then compiled into reports for the RECs to consider. Each REC then met, agreed on a final evaluation, and reported it to the ARC.
For national reporting on a discipline, evaluations were also undertaken of disciplines aggregated across institutions at the 2-digit and 4-digit levels, regardless of the volume of research at those levels within individual institutions. This information is not identifiable at the level of individual institutions.
Minimum threshold for evaluation
So that the ERA data were statistically meaningful, minimum volumes of research output had to be reached before a FoR code was evaluated for ERA.
If the threshold was not reached at the 4-digit level, that code did not receive a score and instead all the data contained in that code were rolled up to the 2-digit level.
If there were insufficient outputs even at the 2-digit level, then that entire code was not evaluated.
The threshold was 30 outputs in FoR codes that used peer review, and 50 indexed journal articles in FoR codes with citation analysis (indexed means the article was indexed in Scopus).
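The threshold and roll-up rule above can be sketched as follows. This is only an illustration of the logic described in the text; the function names and data shapes are invented, not part of any actual ERA system.

```python
# Illustrative sketch of the ERA minimum-threshold rule.
# Thresholds from the text: 30 outputs for peer-reviewed disciplines,
# 50 Scopus-indexed articles for citation-analysed disciplines.

def meets_threshold(count: int, uses_citation_analysis: bool) -> bool:
    """Return True if a unit has enough outputs to be scored."""
    return count >= (50 if uses_citation_analysis else 30)

def evaluation_level(four_digit_count: int, two_digit_count: int,
                     uses_citation_analysis: bool) -> str:
    """Decide at which FoR level (if any) evaluation takes place."""
    if meets_threshold(four_digit_count, uses_citation_analysis):
        return "4-digit"        # scored at the 4-digit code
    if meets_threshold(two_digit_count, uses_citation_analysis):
        return "2-digit"        # data rolled up to the parent 2-digit code
    return "not evaluated"      # insufficient volume at either level
```

For example, a citation-analysed group with 42 indexed articles at the 4-digit level but 120 across the parent 2-digit code would be evaluated at the 2-digit level only.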
Part of the evaluation of research quality came from citation analysis of research outputs. Citation analysis was only used for those research disciplines for which it is a useful indicator of quality. For ERA 2010, citation analysis was used for:
- Cluster 1 (physical, chemical and earth sciences);
- Cluster 3 (engineering and environmental sciences);
- Cluster 5 (mathematical, information and computing sciences);
- Cluster 6 (biological and biotechnological sciences);
- Cluster 7 (biomedical and clinical health sciences); and
- Cluster 8 (public and allied health sciences); as well as for
- FoR group 17 (psychology and cognitive sciences).
Scopus was used as the provider of citation data, and therefore only articles indexed in Scopus were eligible for citation analysis. Articles not indexed in Scopus were still submitted for inclusion in ERA, since citation analysis is only one indicator used in the evaluation.
Year-specific Australian and international citation benchmarks were developed by the ARC for each 4-digit FoR code, and relative citation impacts (RCIs) and centile distributions were calculated against them.
- Relative citation impact: if the average number of citations for articles published in 2006 in a given FoR code is 10, and a particular 2006 article in that FoR code had been cited 20 times, its RCI is 2.
- Centile distribution: the number of articles falling within the top 1%, 5% and 10% of the world's most cited articles in that FoR code.
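The two measures can be sketched numerically as below. The benchmark figures here are invented for illustration; the real ERA benchmarks were year- and FoR-specific values published by the ARC.

```python
# Illustrative sketch of the RCI and centile-distribution measures.
# Benchmark values are invented; real benchmarks came from the ARC.

def rci(article_citations: int, benchmark_average: float) -> float:
    """Relative citation impact: an article's citation count divided by
    the benchmark average for the same FoR code and publication year."""
    return article_citations / benchmark_average

def in_top_centile(article_citations: int, centile_cutoff: int) -> bool:
    """True if the article meets the citation cutoff for a given centile
    band (e.g. the world's top 10% in that FoR code)."""
    return article_citations >= centile_cutoff

# Example from the text: benchmark average of 10, article cited 20 times.
print(rci(20, 10.0))  # 2.0
```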
To assess the quality of research outputs not suited to citation analysis (including non-traditional research outputs), peer review was used on a selection of outputs in:
- Cluster 2 (humanities and creative arts); and
- Cluster 4 (social, behavioural and economic sciences); as well as for
- FoR group 0101 (pure mathematics).
For each 4-digit FoR group, we selected 20% of outputs for peer review, and the RECs then chose a random sample of these to review.
The selection chosen for peer review was not limited by researcher: for example, all outputs could have come from one eligible researcher, or they could have come from a representative sample across the whole discipline. At no stage was the selection made public.
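The two-stage selection can be illustrated with a simple random sample. Only the 20% nomination fraction comes from the text; the pool size, output names, and the RECs' sampling fraction are invented for the sketch.

```python
import random

# Illustrative sketch of two-stage selection for peer review:
# 1) nominate 20% of a discipline's outputs as the peer-review pool;
# 2) the RECs draw a random sample from that pool to review.
# Only the 20% fraction comes from the text; the rest is invented.

def nominate_for_peer_review(outputs: list, fraction: float = 0.20) -> list:
    """Select a fraction of outputs (at least one) as the review pool."""
    k = max(1, round(len(outputs) * fraction))
    return random.sample(outputs, k)

outputs = [f"output-{i}" for i in range(100)]
pool = nominate_for_peer_review(outputs)        # 20 of 100 outputs
reviewed = random.sample(pool, len(pool) // 2)  # RECs sample from the pool
```

Note that, as in the text, nothing constrains the sample by researcher: all sampled outputs could by chance belong to a single author.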
To support the peer review of creative works research outputs, a statement identifying the research component of each output had to be written and made available to the peer reviewer.