Interpreting the ERA results

Some points to remember when interpreting the ERA 2010 results.

Sensitive to size and focus of the institution

Much of the ERA evaluation looks at averages across FoR groups. As a result, a large field containing a hub of research excellence can be diluted by a tail of average research activity.

Similarly, a very strong sub-field (say, a 6-digit FoR group) can be diluted by weaker sub-fields (other 6-digit FoR groups) contributing to the same field (the 4-digit FoR group). The same situation can arise between 4-digit FoR groups contributing to one 2-digit FoR group.

[Figure: cluster diagram]

The result is that a large, comprehensive field, or even a whole institution, risks having its hubs of excellence diluted to the point where they are no longer apparent.
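To see the arithmetic behind this dilution, here is a minimal sketch in Python. All sub-field names, output counts, and relative-citation scores are hypothetical figures chosen for illustration; none of them are ERA data.

    # Hypothetical 6-digit sub-fields within one 4-digit FoR group.
    # "score" is a relative-citation figure where 1.0 = world average.
    sub_fields = {
        "strong hub":  {"outputs": 40,  "score": 2.5},
        "sub-field B": {"outputs": 120, "score": 1.0},
        "sub-field C": {"outputs": 90,  "score": 0.9},
        "sub-field D": {"outputs": 150, "score": 1.0},
    }

    total_outputs = sum(s["outputs"] for s in sub_fields.values())
    group_average = sum(
        s["outputs"] * s["score"] for s in sub_fields.values()
    ) / total_outputs

    print(f"Hub score:           {sub_fields['strong hub']['score']:.2f}")
    print(f"4-digit group score: {group_average:.2f}")  # ~1.13

With these figures, the hub's score of 2.5 is pulled down to a group average of about 1.13 by the larger, merely average tail.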

FoR codes do not equal faculties

Every faculty contributed to multiple 2-digit FoR groups, and every 2-digit FoR group received contributions from multiple faculties.

FoR codes therefore do not correspond to faculties or schools, even where there appears to be a clear correlation (e.g. the Faculty of Pharmacy with FoR 1115 Pharmacology and Pharmaceutical Sciences).

It is consequently impossible to say that a result in an FoR code ‘belongs’ to a faculty or school.
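To make the many-to-many relationship concrete, here is a small Python sketch. The faculty names and their code assignments are invented for illustration, though the 2-digit division titles follow the ANZSRC scheme that ERA uses.

    # Invented faculties mapped to the 2-digit FoR divisions they publish in.
    faculty_to_for = {
        "Pharmacy": {"11 Medical and Health Sciences", "03 Chemical Sciences"},
        "Medicine": {"11 Medical and Health Sciences", "06 Biological Sciences"},
        "Science":  {"03 Chemical Sciences", "06 Biological Sciences"},
    }

    # Invert the mapping: which faculties fed each 2-digit FoR group?
    for_to_faculties = {}
    for faculty, codes in faculty_to_for.items():
        for code in codes:
            for_to_faculties.setdefault(code, set()).add(faculty)

    for code in sorted(for_to_faculties):
        print(f"{code}: {sorted(for_to_faculties[code])}")
    # Every FoR group lists several faculties, so no group's ERA score
    # can be attributed to a single faculty or school.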

Interdisciplinarity

There is no scope for interdisciplinarity to be captured or evaluated in ERA. If an item was assigned multiple FoR codes, it was split across them and evaluated entirely separately in each: a paper on a chemotherapy drug assigned both an oncology code and a medicinal chemistry code, for example, counted as half an item in each.
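Here is a minimal sketch of that apportionment rule, assuming an equal split across however many codes an item carries (so two codes give half an item each). The paper titles are made up, and the 4-digit codes are used purely as examples.

    from collections import defaultdict

    # Hypothetical outputs; 1112 = Oncology and Carcinogenesis,
    # 0304 = Medicinal and Biomolecular Chemistry.
    outputs = [
        {"title": "Chemotherapy drug paper", "for_codes": ["1112", "0304"]},
        {"title": "Oncology trial paper",    "for_codes": ["1112"]},
    ]

    counts = defaultdict(float)
    for item in outputs:
        share = 1.0 / len(item["for_codes"])  # equal split across assigned codes
        for code in item["for_codes"]:
            counts[code] += share

    print(dict(counts))  # {'1112': 1.5, '0304': 0.5}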

ERA is backward-looking

ERA 2010 evaluated research published as early as 2003, and given the inevitable delay between carrying out research and publishing it, some of that work was actually done a decade or more before the results appeared.

Further, since the ERA census date was 31 March 2009, any appointments made after this time were completely excluded from ERA.

It should therefore be remembered that ERA gives us an indication of our past research performance, not our current performance, and certainly not our future performance.

[Figure: ERA backward-looking graphic]

Ranked journal and conference list

The ranked journal and conference list was highly contentious, generating significant disagreement from experts in the various fields about both the rankings and the FoR codes assigned to the journals.

The ranking disagreements ranged from simple disputes about the quality of one journal relative to another to broader disagreements between different schools of thought about whole fields.

Certain journals are also perceived as higher quality by the particular sub-disciplines that publish in them (e.g. a medical journal regarded as average by the medical research community might be considered a very high-quality outlet by the physiotherapy research community).

The rankings also ignore the fact that a more specialist or local journal may at times be the most appropriate vehicle for a particular piece of research.

The FoR codes the ARC assigned to journals were also contentious, at times leading to research being incorrectly classified and therefore measured against unsuitable benchmarks, as well as counted toward the score of an inappropriate field.

Though most journals were assigned reasonable FoR codes, there were many instances where the codes were far too narrow and, at times, completely wrong.

The ARC's final assignment of both rankings and FoR codes was not transparent, and it appears that at times the choice of code and rank simply came down to whichever group of experts was last to contribute to the process.