Reading the Factor Scale Report

This summary format reports the CEQ Factor Scale information and the Overall Satisfaction item. The CEQ factor scales reflect five key aspects of the student learning experience tapped by the questionnaire items. The item responses are combined and reported in terms of the proportions of graduates who agreed or disagreed that their experience of their course was educationally positive on each of these five factors. There is a considerable body of research linking students' perceptions of these key aspects of their course experience with the adoption by students of different approaches to learning, leading to variations in the quality of student learning outcomes.


(Source: Ainley & Johnson (2000), The Course Experience Questionnaire 2000 Interim Report, ACER).

The Course Experience Questionnaire asks graduates to indicate the extent to which they agree or disagree with 25 statements, using a five-point scale where '1' represents strong disagreement and '5' represents strong agreement. The intervening points on the scale (2, 3 and 4) do not have value anchors. Some CEQ items are phrased negatively; for example, "The workload was too heavy". The scoring of these items is reversed before their data are combined with those of the other items to calculate the factor scale score. In the development of the survey, analysis of the item responses revealed that the CEQ items relate to five aspects of students' experience of their courses:

Good Teaching Scale (GTS) - six items

3. The teaching staff of this course motivated me to do my best work.
7. The staff put a lot of time into commenting on my work.
15. The staff made a real effort to understand difficulties I might be having with my work.
17. The teaching staff normally gave me helpful feedback on how I was going.
18. My lecturers were extremely good at explaining things.
20. The teaching staff worked hard to make their subjects interesting.

The Good Teaching Scale is characterised by practices such as providing students with feedback on their progress, explaining things, making the course interesting, motivating students, and understanding students' problems. There is a body of research linking these practices to learning outcomes. High scores on the Good Teaching Scale are associated with the perception that these practices are present. Lower scores reflect a perception that these practices occur less frequently.

Clear Goals and Standards Scale (CGS) - four items

1. It was always easy to know the standard of work expected.
6. I usually had a clear idea of where I was going and what was expected of me in this course.
13. It was often hard to discover what was expected of me in this course.
r24. The staff made it clear right from the start what they expected from students.
(r= item scoring reversed to allow for negative phrasing)

Although establishing clear goals and standards could be considered part of good teaching in a broader sense, a course could employ the practices captured by the Good Teaching Scale yet still fail to set clear goals for the course and clear expectations of the standard of work required from students.

The Appropriate Assessment Scale (AAS) - three items

r8. To do well in this course all you really needed was a good memory.
r12. The staff seemed more interested in testing what I had memorised than what I had understood.
r19. Too many staff asked me questions just about facts.
(r= item scoring reversed to allow for negative phrasing)

This scale concentrates on one particular aspect of assessment and is not exhaustive in its measurement of assessment approaches. It focuses on the extent to which assessment emphasised recall of factual information rather than higher order thinking. Embedded in the Appropriate Assessment Scale is the assumption that assessment which does not focus on factual recall concentrates instead on higher order processes.

There is one additional item about assessment, which is not used in analysis. Item 16: 'The assessment methods employed in this course required an in-depth understanding of the course content'. This is a new item being piloted to replace an item which did not load unambiguously on any single scale.

The Appropriate Workload Scale (AWS) - four items

r4. The workload was too heavy.
14. I was generally given enough time to understand the things I had to learn.
r21. There was a lot of pressure on me to do well in this course.
r23. The sheer volume of work to be got through in this course meant it couldn't all be thoroughly comprehended.
(r= item scoring reversed to allow for negative phrasing)
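The reverse-scoring rule marked by 'r' above can be sketched in code. This is a minimal illustration only: the function name and the representation of responses as integers are assumptions, and the set of reversed items is taken from the 'r' markers in the item lists.

```python
# Sketch of CEQ reverse scoring. Assumed representation: each response is
# an integer on the 1-5 scale. The reversed items are those marked 'r'
# in the scale item lists above (negatively phrased statements).
REVERSED_ITEMS = {4, 8, 12, 19, 21, 23, 24}

def score_response(item_number, response):
    """Return the scored value for a single item response.

    For reversed items, a response of 1 becomes 5, 2 becomes 4, and so
    on, so that a high score always reflects a positive experience.
    """
    if not 1 <= response <= 5:
        raise ValueError("responses must be on the 1-5 scale")
    if item_number in REVERSED_ITEMS:
        return 6 - response
    return response
```

Under this sketch, strongly agreeing (5) that "The workload was too heavy" (item r4) contributes a score of 1 towards the Appropriate Workload Scale.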

High scores on the Appropriate Workload Scale indicate reasonable workloads (note the reversed items). These are graduates who disagree with the proposition that "The workload was too heavy" and who agree that "I was generally given enough time to understand the things I had to learn". The evidence from research on student learning is that heavy workloads push students towards an approach to learning that emphasises skimming across the surface of topics, without spending the time needed to genuinely engage with and understand the material they are meant to be learning.

The Generic Skills Scale (GSS) - six items

2. The course developed my problem-solving skills.
5. The course sharpened my analytic skills.
9. The course helped me develop my ability to work as a team member.
10. As a result of my course, I feel confident about tackling unfamiliar problems.
11. The course improved my skills in written communication.
22. My course helped me to develop the ability to plan my own work.

The Generic Skills Scale is an attempt to take into account the extent to which university courses add to the generic skills that their graduates might be expected to possess. Discipline-specific skills and knowledge are often crucial to prospects for employment and further study. Nevertheless, the emphasis on generic skills stems from the belief that knowledge quickly becomes obsolete, and generic skills that may have been acquired in the learning process should endure and be applicable in a broader context. Skills typically identified in this context include communication skills, the capacity to learn new skills and procedures, the capacity to make decisions and solve problems, the ability to apply knowledge to the workplace, and the capacity to work with minimum supervision.

The Overall Satisfaction Item (OSI)

25. Overall, I was satisfied with the quality of this course.

This single item asks graduates about their overall level of satisfaction with their degree course.


'Agreement' combines item responses 4 and 5 (agree and strongly agree), and 'Disagreement' combines responses 1 and 2 (strongly disagree and disagree). The midpoint response position 3 is reported as 'Neutral'.

For improvement purposes, these proportions usually provide a clearer indication of how the cohort of students has experienced a particular aspect of the course than the scale mean (e.g. 3.5) does. For example, data indicating that 50% of the students in a course do not feel they clearly understand the goals of the course may signal the need to address this issue more clearly than the same data presented as a mean of +2 for the scale.

'Broad agreement' is also a term that is sometimes used. Broad agreement is obtained by aggregating the item responses for ratings of 3, 4 and 5 (neutral, agree, and strongly agree) into one category.
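The aggregation rules described above can be sketched as follows. The data and the function name are hypothetical; only the category definitions (disagreement = 1 and 2, neutral = 3, agreement = 4 and 5, broad agreement = 3, 4 and 5) come from the text.

```python
# Sketch of reporting 1-5 item responses as the proportions described
# above: Disagreement (1-2), Neutral (3), Agreement (4-5), and
# broad agreement (3, 4 and 5 combined).

def response_proportions(responses):
    """Summarise a list of 1-5 responses into the reported categories."""
    n = len(responses)
    disagree = sum(1 for r in responses if r in (1, 2)) / n
    neutral = sum(1 for r in responses if r == 3) / n
    agree = sum(1 for r in responses if r in (4, 5)) / n
    return {
        "Disagreement": disagree,
        "Neutral": neutral,
        "Agreement": agree,
        "Broad agreement": neutral + agree,  # ratings 3, 4 and 5
    }

# Hypothetical responses to one item from ten graduates:
print(response_proportions([1, 2, 2, 3, 3, 4, 4, 4, 5, 5]))
# -> 30% Disagreement, 20% Neutral, 50% Agreement, 70% broad agreement
```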


University of Sydney respondents are assigned to faculties on the basis of the field of study they write on their survey. The total number of graduates from all fields of study assigned to the faculty who returned usable surveys is reported as the (N=) figure in the top right-hand corner of the report.

Graduates indicate their Field of Study on their CEQ survey (not their degree course). Fields of Study are allocated to faculties based on the Field of Study Mapping provided by the University's Planning Support Office. More information on how the University maps Field of Study to different faculties is available in the FAQ link from the home page.

The FREQUENTLY ASKED QUESTIONS section (available from the home page) includes answers to the following questions:

  • Who can I talk to about the faculty's CEQ results?
  • How are the CEQ "Field of Study" categories matched to faculties?
  • What proportion of Sydney graduates return CEQ surveys?
  • When are the graduates surveyed?
  • Why does the QA Group report CEQ data?
  • What is the difference between the CEQ and the SCEQ?