Professor Herb Marsh, University of Oxford

Herb Marsh is Professor in Educational Studies at Oxford University. Prior to his appointment at Oxford he was Research Professor of Educational Psychology at the University of Western Sydney, where he served as Dean of Graduate Research Studies (1996–2000) and Pro-Vice-Chancellor (1995–96, 1997). His substantive interests span self-concept and self-esteem, achievement motivation, evaluation of teaching, peer review, and student achievement; his methodological interests include multilevel modelling, longitudinal modelling, meta-analysis, and construct validity. He is widely published, with 350 articles in more than 70 journals, 60 chapters, 14 monographs, and 350 conference papers, and co-edits the International Advances in Self Research monograph series.

Professor Marsh's research has consistently attracted external funding, including success on 24 proposals to the Australian Research Council over the last 25 years as well as more recent United Kingdom grants from the Economic and Social Research Council, the Higher Education Funding Council for England, and the Higher Education Authority. In 2008 he was awarded an ESRC Professorial Fellowship, a highly competitive award given to only 3–5 social science researchers across the whole of the UK, which provides professorial salary, support staff, and infrastructure for an extended research program.


Professor Marsh will be holding two public lectures and a workshop in September 2009:


Student evaluations of university teaching – recommendations for policy and practice

For: Public lecture
Date: Tuesday, September 1
Time: 5–6pm
Venue: Education Lecture Theatre 424, Education Building A35
RSVP: Not necessary


Students' evaluations of teaching effectiveness (SETs) have attracted considerable interest and a great deal of research in universities all over the world. Although SETs have a solid research base, stemming largely from work conducted in the 1980s, it is surprising that research in the past decade has not done more to address previously identified critical limitations or to incorporate the exciting methodological advances relevant to SET research.

Perhaps the most damning observation is that the emphasis in the use of SETs has been on personnel decisions rather than on improving teaching effectiveness. Although much work is still needed on how best to improve teaching, it is clear that relatively inexpensive, unobtrusive interventions based on SET feedback can make a substantial difference to teaching effectiveness. This is not surprising, given that university teachers typically receive little or no specialised training in how to teach well and apparently cannot fully utilise SET feedback without outside assistance.

Why do universities continue to collect and disseminate potentially demoralising feedback to academics without more fully implementing programs to improve teaching effectiveness? Why is there not more SET research on how to enhance the usefulness of SETs as part of a program to improve university teaching? Why have there been so few intervention studies that address the problems identified in reviews of this research conducted a decade ago? These, and other issues, are addressed in this public lecture.


Workshop: methods and approaches to meta-analysis

For: Public event
Date: Friday, September 18
Time: 9am–12.30pm
Venue: Lecture Theatre 424, Education Building A35
RSVP: Dr Gregory Liem (g.liem@usyd.edu.au) by Wednesday, September 16


Meta-analysis is a statistical method for synthesising previous research on a particular topic and identifying trends across that research. All relevant studies on the topic are systematically reviewed, and the results of each study are translated into a common effect size metric that can be compared across studies. Through a synergistic combination of quantitative methods (representing study outcomes on a standardised metric) and qualitative methods (classifying study characteristics as moderator variables), meta-analysis is uniquely placed to inform both theory and application through a holistic approach to research. The key aims of this workshop are to:

- inform researchers about what meta-analysis is and how it should be used;
- help researchers understand the distinctions between the different ways of conducting meta-analysis;
- identify the appropriateness of each method for particular research problems;
- provide research methodology instructors with materials for an introductory lecture on meta-analysis; and
- better enable participants to conduct meta-analyses of their own.
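
To make the "common effect size metric" idea concrete, the following minimal Python sketch pools three studies under a fixed-effect model on standardised mean differences (Cohen's d). The study data are invented for illustration, and the fixed-effect choice is an assumption; this is not workshop material.

import math

# Hypothetical studies: (mean_treatment, mean_control, pooled_sd, n_treatment, n_control)
studies = [
    (10.2, 9.1, 2.0, 40, 40),
    (11.0, 9.8, 2.5, 60, 55),
    (9.5,  9.3, 1.8, 30, 35),
]

effects = []
for m1, m2, sd, n1, n2 in studies:
    d = (m1 - m2) / sd  # standardised mean difference (Cohen's d)
    # Large-sample approximation to the sampling variance of d
    var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    effects.append((d, var))

# Fixed-effect pooling: inverse-variance weights, so precise studies count more
weights = [1 / v for _, v in effects]
pooled = sum(w * d for (d, _), w in zip(effects, weights)) / sum(weights)
se = math.sqrt(1 / sum(weights))

print(f"pooled d = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")

A random-effects analysis would additionally estimate between-study heterogeneity before weighting; the choice between the two is exactly the kind of distinction between methods the workshop addresses.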


Improving the peer-review process for ARC grant applications – reliability, validity, bias and generalisability

For: Public lecture
Date: Monday, September 28
Time: 1–2pm
Venue: Lecture Theatre 101, Law Building
RSVP: Not necessary


Peer review is a gatekeeper, the final arbiter of what is valued in academia, yet it has been criticised against the traditional research criteria of reliability, validity, and generalisability, and for its potential biases. Despite a considerable literature, there is surprisingly little sound peer-review research examining these criteria or strategies for improving the process. This presentation summarises a research program based on data from the Australian Research Council (10,023 reviews by 6233 external assessors of 2331 proposals from all disciplines). The dataset is comprehensive not only in its size but also in its scope: it includes peer reviews from all science, social science, and humanities disciplines, from assessors all over the world.

Some of the assessors whose decisions were included in the evaluation had been chosen by the applicants themselves; others were nominated by funding-body panels. Using multilevel models, we critically evaluated peer reviews of grant applications and potential biases associated with applicants, assessors, and their interaction (for example: age, gender, university, academic rank, research team composition, nationality, and experience). Peer reviews lacked reliability, but the only major systematic bias found involved the inflated, unreliable, and invalid ratings of assessors nominated by the applicants themselves.
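
As a rough illustration of how multilevel models can quantify the reliability of peer reviews (an assumed, textbook-style variance-components formulation, not necessarily the specification used in this research program), the rating $r_{ij}$ given by assessor $i$ to proposal $j$ can be decomposed as

r_{ij} = \mu + p_j + a_i + \varepsilon_{ij}, \qquad \rho = \frac{\sigma_p^2}{\sigma_p^2 + \sigma_a^2 + \sigma_\varepsilon^2}

where $p_j$ is the proposal effect, $a_i$ the assessor effect, and $\rho$ the intraclass correlation of a single rating. A low $\rho$ means that little of the variation in ratings reflects the proposals themselves, which is what a finding that peer reviews lack reliability amounts to.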

We propose a new approach, the reader system, which was evaluated with psychology and education ARC grant proposals and found to be substantially more reliable and strategically advantageous than traditional peer reviews of grant applications.