ASET-HERDSA 2000 Main Page

The missing link: Developing the nexus between student feedback surveys and development for teachers

Barbara Black
The University of Western Australia
Robert Cannon
The University of Adelaide
Owen Hicks
The University of Western Australia
In most Australian universities, student-perceptions-of-teaching surveys are seen to play an important role in assisting staff to improve their teaching. It is often assumed that on receipt of their student survey report, diligent and committed academics will review and interpret the report, identify areas of weakness in their teaching and seek to address them through various means, including identifying suitable staff development activities and accessing print and non-print resources. While some teaching staff do find the time to identify staff development workshops on teaching or to access relevant books and other resources, many do not, both because of difficulties in identifying their specific needs and because of the effort and time they perceive to be involved in locating and accessing resources.

The project 'Linking Feedback from Student Perceptions of Teaching to Active Strategies for Teaching Improvement', funded by a two-year grant from the Committee for University Teaching and Staff Development, is a collaborative project between the University of Adelaide and the University of Western Australia. The project aims to provide an effective way of linking feedback on various aspects of teaching to existing resources that might assist in the development of strategies to enhance student learning through changed teaching practices. Poorly rated items in student survey reports will be linked with relevant strategies and resources through an automated system. Academics receiving feedback reports will thus be provided not only with student ratings: areas for development and change will also be identified, and available strategies such as workshops, books, videos, CD-ROMs and Web-based resources will be listed. For the project team in each university, this involves auditing and classifying existing resources, establishing databases and linking selected student feedback items with the resource databases.

This paper provides details of the project and a progress report on what has been achieved and learned in the first six months of the project.

Introduction


The University of Adelaide (UA) and the University of Western Australia (UWA), like many universities nationally and internationally, have systems in place for obtaining student feedback on teaching through the use of surveys. In both institutions, survey results are used primarily to provide feedback to individual academics on their teaching, with a view to them engaging in self-directed staff development for teaching improvement, rather than to enable the faculty or department to assess an individual's teaching performance. At UWA a confidential feedback service is provided by the Evaluation of Teaching Unit within Organisational and Staff Development Services (OSDS), and at UA by the Advisory Centre for University Education (ACUE); both centres provide a range of staff development services and resources for teaching staff.

Current student survey reports are intended to enable teachers to identify areas of strength and weakness in their teaching and learning strategies. In theory they will take note of the items with low ratings in their report and then set out to improve their teaching through a variety of means, including accessing staff development opportunities and materials provided by their institutions. However, for many academics it is difficult to find the time to locate and access relevant resources or attend teaching workshops, or even to seek advice on what would best suit their development needs.

In order to address the gap between feedback on teaching and access to relevant staff development, ACUE and OSDS are carrying out a project which aims to provide teachers with timely and relevant strategies for improving their teaching. The project 'Linking Feedback from Student Perceptions of Teaching to Active Strategies for Teaching Improvement' has been funded by the Committee for University Teaching and Staff Development for 2000 and 2001. This paper describes the project and provides a work-in-progress report of the first six months of the project.

Background


There is considerable literature supporting the need to link student evaluations to strategies for the improvement of teaching. For example, Aleamoni (1974) found that the provision of encouragement and suggestions for alternatives to improve teaching increased the impact of student evaluations. McKeachie and Lin (1975) demonstrated that the provision of a printed report was not enough. One of the three factors in staff failure to improve after student feedback was seen by McKeachie (1979) as being that even when faculty members want to improve, they may not know what to do. Other studies and reviews (Wilson, 1986; Marsh & Roche, 1993; Marsh & Roche, 1997) report similar conclusions. However, the literature shows an absence of attention to the link between specific student feedback and targeted academic development. For example, Moses, in her book Academic Staff Evaluation and Development (1988), did not attempt to link student evaluations to specific professional development for academic staff. A national conference on student evaluation was held in Sydney in 1993 at which none of the papers addressed the link between feedback on teaching and targeted strategies for improvement. Trigwell and McKenzie reported on "the range of teaching development opportunities that arise from the administration of a student feedback on teaching system" (1995, p.82). They provided many useful ideas in relation to developing teachers, a number directly derived from studies of highly rated teachers, but there was no suggestion of linking specific suggestions for development to the feedback received by individual academics. In a recent meta-analysis of research into whether evaluation of teaching leads to improvement, Murray (1997) concluded that student evaluation contributes significantly 'particularly if evaluation is supplemented by expert consultation' (p.8).
However, in a recent and quite comprehensive sourcebook on instructional consultation (Brinko & Menges, 1997), no mention is made of student evaluation.

Project details

Both ACUE and OSDS produce hundreds of student-perceptions-of-teaching survey reports for individual academics each year. Because of the volume of reports produced, and the limited number of staff available in both institutions to provide one-on-one assistance to teaching staff, an automated system is being developed to link student survey items with the strategies available in each institution for teaching and learning improvement.

The project has four major components:

  1. classification of existing academic development activities and resources at each institution - this includes printed, audio-visual, CD and Web-based materials, resource kits, and teaching workshops and programmes

  2. database creation - a common database will be developed for activities and resources at each institution

  3. linking of commonly used items in student-evaluation of teaching systems to database items - OSDS's existing 'SPOT' (Student Perceptions of Teaching) item bank and ACUE's redeveloped 'SET' (Student Evaluation of Teaching) software will be linked to the common database

  4. re-programming of current report-generating software - existing programmes and software at both universities will be re-programmed to identify the three lowest-scoring items in a report and to provide, within the report, strategies from the database to assist in improving those three areas.
The resource databases developed by ACUE and OSDS will have the same structure, but the data will vary depending on what is available to teaching staff at each institution. It is anticipated that, in reviewing the content of each other's databases, ACUE and OSDS will each improve their staff development resource collections by identifying material held, or activities offered, by the other institution.

As each institution has a large collection of survey items (numbering in the thousands), the project is limited to items that appear frequently in student questionnaires. Only the three lowest-rated items will be linked to teaching improvement strategies, to help the teacher focus on the areas most needing improvement. Focusing on only three areas also contains the size of the task so that it seems manageable and achievable in the context of a full academic workload.
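The report-enhancement step described above - picking the three lowest-rated items and attaching at most three strategies to each - can be sketched as follows. This is a minimal illustration only: the item codes, rating values, resource entries and function names are all hypothetical assumptions, not the project's actual SPOT/SET software.

```python
# Hypothetical sketch: select the three lowest-rated survey items and
# attach up to three improvement strategies to each from a resource
# database. All identifiers and data below are illustrative, not the
# project's real item bank or database.

def lowest_three(ratings):
    """Return the codes of the three items with the lowest mean rating."""
    return [code for code, _ in sorted(ratings.items(), key=lambda kv: kv[1])[:3]]

def enhance_report(ratings, resource_db):
    """Map each of the three lowest-rated items to at most three strategies."""
    return {code: resource_db.get(code, [])[:3] for code in lowest_three(ratings)}

# Illustrative data: item code -> mean student rating (1-5 scale).
ratings = {"ITEM01": 4.6, "ITEM02": 2.1, "ITEM03": 3.9,
           "ITEM04": 2.8, "ITEM05": 2.5}

# Illustrative resource database: item code -> ranked strategies.
resource_db = {
    "ITEM02": ["Workshop: Encouraging participation",
               "Book chapter: Small group teaching",
               "Video: Questioning techniques",
               "Web guide: Active learning"],
    "ITEM04": ["Workshop: Assessment design"],
}

report = enhance_report(ratings, resource_db)
# report covers ITEM02, ITEM05 and ITEM04, with at most three
# strategies each (ITEM05 has none recorded in this sketch).
```

Note that truncating to three strategies happens per item, so an item with only one (or no) recorded strategy simply yields a shorter list, mirroring the paper's observation that not every strategy type can be identified for every item.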

Each institution has a project team comprising staff with a range of skills and knowledge, including project management, student evaluation of teaching, teaching and learning, academic staff development and programming.

ACUE Project Team:
Robert Cannon, Director
Helen Limburger, SET Manager
Gerry Mullins, Senior Lecturer

OSDS Project Team:
Owen Hicks, Director / Barbara Black, Acting Director
Elisabeth Santhanam, Senior Research Fellow, Evaluation of Teaching Unit
Kenn Martin, Project Officer

Progress to date

Classification structure for resources and activities

At the start of the project it quickly became apparent that the systematic approach - first developing a detailed classification structure for each institution's considerable resources and development activities (beyond the existing broad classification structure), then auditing and re-classifying existing resources - was going to be extremely time-consuming. It was decided that it would be more efficient to start with commonly used survey items and to identify specific resources and activities for those items. Because they are working within the existing broad classification structure, this approach requires project team members to be very familiar with the resource collection holdings and with staff development activities.

It is intended that the detailed classification structure for resources will be developed on an ongoing basis by identifying keywords for each resource that is linked with an item. For example, a general text on teaching and learning in higher education may contain an excellent chapter on student participation in tutorials which could be linked with the survey item 'I have been encouraged to participate in class'. The resource collection database would have the key words 'tutorial participation' added for that particular text, and the list of keywords would grow as additional chapters or pages of the text were identified as resources for other items. In this way a detailed classification system will be built up over time as resources are identified for more and more survey items.
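The incremental keyword classification described above can be sketched in a few lines: each time a resource is linked to a survey item, keywords are added to that resource's record, so the detailed classification grows as more items are researched. The class, method names and resource identifiers here are illustrative assumptions, not the project's actual database schema.

```python
# Hypothetical sketch of the incremental keyword classification: linking
# a resource to a survey item also tags the resource with keywords, so
# the catalogue's classification grows over time. All names are
# illustrative, not the project's real database.

from collections import defaultdict

class ResourceCatalogue:
    def __init__(self):
        self.keywords = defaultdict(set)    # resource id -> accumulated keywords
        self.item_links = defaultdict(set)  # survey item text -> resource ids

    def link(self, item, resource_id, new_keywords):
        """Link a resource to a survey item and record keywords for it."""
        self.item_links[item].add(resource_id)
        self.keywords[resource_id].update(new_keywords)

cat = ResourceCatalogue()
# A general teaching text gains keywords as chapters are matched to items.
cat.link("I have been encouraged to participate in class",
         "teaching-text-01", {"tutorial participation"})
cat.link("Concepts are explained clearly in lectures",
         "teaching-text-01", {"lecturing", "explanation"})
# The same text is now classified under three keywords.
```

Using sets for the keyword lists means re-linking the same chapter to another item never duplicates a keyword, which matches the idea of a classification that is built up, not re-entered, over time.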

Identification of common survey items

The next step was to identify survey items common to both institutions' item banks so that the work carried out in either institution would be beneficial to both. Twenty items have been identified for the first few months of the project, with each institution taking responsibility for half the items in order to avoid duplication of effort. ACUE is focusing on student-centred learning items, with priority given to those mirrored in OSDS's SPOT item bank and the Course Experience Questionnaire (CEQ). OSDS is concentrating on global items and default items in the SPOT item bank that are similar to the extended standard version of ACUE's SET survey items and the CEQ. Priority has been given to items over which the teacher is likely to have some control or some impact.

Selection of resources for survey items

It was recognised that academics receiving survey reports needed to be presented not only with a limited number of areas for development (only the three lowest-rated items would be linked to teaching and learning strategies), but also with a limited number of strategies for each item. The project teams agreed that providing three resources or activities for each of the three lowest-rated items would give teachers different options without overwhelming them with too many choices.

One of the most difficult tasks has been determining which three of the many resources and activities available would be recommended for a particular item. While strategies are intended to be relevant for all teaching staff, it was recognised that they should not be too generic. For example, encouraging students to take an active part in class may require different strategies in lectures, tutorials, clinical sessions and practical classes. It was also agreed by the project teams that the strategies should not be merely a list of teaching techniques or tips. The following has been developed as a working guide for choosing a breadth of resources and activities for each item:

While it is not proving possible to identify each of the above strategies for all items, it is useful to consider each item with this framework in mind.

The project teams are also concerned with the quality of the strategies being provided, i.e. whether they are the 'best' of the resources and activities available. At a recent meeting of the OSDS project team, it was agreed that each item should be researched by two team members separately, who would then exchange the strategies identified and rank the top three. These would then be subject to a critical review by the ACUE team to further ensure quality. Items researched by ACUE would be reviewed by OSDS in a similar way.

Next steps

Identification of teaching and learning strategies for commonly used survey items will continue into the second half of 2000. Concurrently, an interim database of resources and activities is being set up at UWA, which will enable project team members to input data directly as they research survey items. Once it has been tested and is operating successfully, it will be copied to ACUE.

Careful consideration is being given to the presentation of the section of the report containing teaching and learning strategies. It is important that those receiving the revised feedback reports for the first time are not made to feel deficient in their teaching or that they scored poorly in the student surveys. The introduction to this section of the report will need to make clear to recipients that suggestions for strategies to improve their teaching are based on their three lowest scores, regardless of how high or low their mean scores are. A teacher receiving high scores for all items would still receive suggestions for improving their teaching. This approach is a reflection of the academic development focus of the project and the need for ongoing improvements to teaching.

We are also reviewing whether the provision of feedback advice should be an option selected by the staff member when they order an evaluation, to avoid the possibility of antagonising staff with gratuitous advice.

In the next couple of months, reprogramming of report-generating software will commence with a view to an initial testing of the automated linking of selected survey items with the database. It is intended that enhanced reports with strategies for selected items will be available to a limited number of teaching staff by early next year. Feedback will be sought from SPOT and SET users in order to identify any changes required before proceeding with the provision of enhanced reports for all commonly used items later in 2001.

In 2001 profiles will be developed of the frequency of occurrence of recommended strategies and the frequency of accessing of academic development activities and resources at each university. A dissemination package will be developed towards the end of 2001.

A further work-in-progress paper will be presented at the 2001 HERDSA conference.

References


Aleamoni, L.M. (1974). The usefulness of student evaluations in improving college teaching. Tucson: Office of Instructional Research and Development, University of Arizona.

Brinko, K.T. and Menges, R.J. (Eds) (1997). Practically speaking: A sourcebook for instructional consultants in higher education. Stillwater, Oklahoma: New Forums Press.

Marsh, H.W. and Roche, L. (1993). The use of students' evaluations and an individually structured intervention to enhance university teaching effectiveness. American Educational Research Journal, 30(1), 217-251.

Marsh, H.W. and Roche, L. (1997). Making students' evaluations of teaching effectiveness effective. American Psychologist, 52(11), 1187-1197.

McKeachie, W.J. (1979). Student ratings of faculty: A reprise. Academe, 65(6), 384-397.

McKeachie, W.J. and Lin, Y.G. (1975). Use of standard ratings in evaluation of college teaching. Final Report to National Institute of Education, Ann Arbor: Department of Psychology, University of Michigan.

Moses, I. (1988). Academic staff evaluation and development. St Lucia: University of Queensland Press.

Murray, H.G. (1997). Does evaluation of teaching lead to improvement of teaching? International Journal for Academic Development, 2(1), 8-23.

Trigwell, K. and McKenzie, J. (1995). How can highly rated teachers assist with teaching development? In Brew, A. et al. (Eds), Judging the quality of teaching: Approaches to university-wide student feedback. Centre for Teaching and Learning Monograph Number 1. Sydney: University of Sydney.

Wilson, R.C. (1986). Improving faculty teaching: Effective use of student evaluations and consultants. Journal of Higher Education, 57(2), 196-211.

Contact details: Barbara Black, The University of Western Australia
Phone (08) 9380-3845 Fax (08) 9380-1156 Email

Please cite as: Black, B., Cannon, R. and Hicks, O. (2001). The missing link: Developing the nexus between student feedback surveys and development for teachers. In L. Richardson and J. Lidstone (Eds), Flexible Learning for a Flexible Society, 69-74. Proceedings of ASET-HERDSA 2000 Conference, Toowoomba, Qld, 2-5 July 2000. ASET and HERDSA.

Created 22 Sep 2001. Last revised: 29 Mar 2003. HTML: Roger Atkinson