ASET-HERDSA 2000 Main Page

Evaluation goes online - What are the issues?

Ian C. Reid
Andrew Welch

University of South Australia
The increased pressures on Universities to have demonstrable accountability measures in the form of quality assurance systems, coupled with the increasing use of flexible delivery technologies, produce a new environment in which to carry out the evaluation of teaching. This paper discusses one component of the evaluation of teaching - the evaluation of teaching by students, via online methods, at the University of South Australia. The advantages of this approach are described.

The results of the use of online evaluations in a large first year Art subject are presented and are compared with 'pencil and paper' instruments previously used. Examples of issues that needed to be addressed are discussed. The benefits and risks to students, staff and the institution as a whole are considered. Finally, future developments planned are outlined.


Introduction

The issue of quality teaching, whilst always central to universities' missions, is now taking on a new role. In the past, quality was a way for lecturers to improve their own practices as part of their professional responsibilities. Now, the new pressures brought about by competition, globalisation and the changing role of government with respect to university governance are fundamentally changing the purposes of mechanisms aimed at evaluating the quality of teaching. For instance, witness the following statement by the Federal Minister of Education, David Kemp (1999a), from a website designed to provide prospective students with information about university courses:
If you are about to apply for a place at university, you will realise that this is one of the most important choices that you will make. This site is designed to help prospective higher education students make informed study choices. It is one of a series of ten, one for each broad field of study. The site is a guide to Australian universities and the courses that they offer within each broad field of study. The site also gives information about the employment and study outcomes for past graduates and how they felt about their courses. This site is one important source of information. You should also seek out other information and advice so that your choice best suits you, taking into account your own study and career interests, your goals and ambitions. I wish you good luck with your future studies.

Here we have a confluence of the Internet, the description of measures of quality in terms of student outcomes such as employment, and the pitting of universities against each other, all couched in the rhetoric of 'consumer choice' and the free market. What then does this mean for improving the quality of teaching? Where are these forces taking universities, and what will they mean for evaluative processes? How can teachers harness the technologies to improve their practices while responding to these wider influences? To begin to address these questions, we briefly describe the current context, before moving to our experiences in implementing evaluations.

A cycle of evaluation and improvement based on student feedback is a fundamental component of the process of quality improvement in universities (Ramsden, 1998). Whilst this fits well with a 'consumer' orientation of education, the question remains of how we define quality and how best to use information from students to improve it. Any evaluative mechanism can be shown to reflect particular constructions of 'good teaching' (Martens and Prosser, 1998). The Australian scene in this area is dominated by the Course Experience Questionnaire (CEQ) (Ramsden, 1991), and variants of it have been applied in many and various contexts (Santhanam et al., 2000). Perhaps its most powerful application is its use by the Good Universities Guide (Ashenden & Milligan, 2000) which, with its mix of student feedback on teaching quality and competitive rhetoric, provides powerful reasons for universities to consider course quality within this framework. Add to this the advent of a Commonwealth Government quality agency, aiming to improve universities' 'international competitiveness', and universities are bound to respond in some way. In the words of the current Minister of Education (Kemp, 1999b):

Australia is part of a global community delivering higher education and the increased emphasis on quality assurance is a global phenomenon. We must have a national quality assurance framework that is internationally credible.
It must also be recognised that the nature of the learning environments in which students find themselves is rapidly changing. Key to this is the concept of flexible learning. Nicoll (1998) says the 'term "flexible learning" seems to be used increasingly in relation to the kinds of reformulations of course offerings' and goes on to critically review the way government policy focuses 'on the potential usages of distance, open, mixed-mode or flexible learning strategies' (p291). While Nicoll challenges the role of policy in the theory of teaching, the argument for allowing students choice, being able to choose when, where and how they study, is clearly what 'flexible delivery' is about. Flexible delivery is about teachers making use of a variety of teaching methods and resources to enable students to learn in a way that suits them. Online materials are one method of 'flexible learning' through which a university can offer a greater range of resources to students. The online evaluation of courses seems congruent with online offerings, especially to an increasingly information technology literate cohort of students, even for subjects taught in traditional ways.

Not only is the general learning environment changing, but in addition the profile of the student body is undergoing a transformation. Smith and Webster (1997) predict the fate of the on-campus offering in the information age:

One should remember that the new technologies may now begin to be an increasing factor in decreasing demand for residential education. The frequently evoked scenario of the 'virtual university', in which students learn from and interact with leading academics irrespective of distance, is now, in principle at least, a practical possibility, even a reality (Laurillard, 1993). This technological solution to spiralling costs in higher education will have an inevitable appeal - perhaps excessively - to policy-makers. (Smith and Webster, 1997 pp.12-13)
While Smith and Webster (1997) are critical of the pressure to get tertiary education 'online' in order to cut costs, others are more pragmatic. Martin (1999) points out that the pace of change in the information age has left teachers feeling undervalued and overworked. This has been accompanied by a 'decline in the traditional collegial ways of working' (p4), while universities need to become good at learning in order to cope with change. Martin claims that a proactive attitude to teaching is the only viable option and goes on to say:
'We can wait a long time for support and improved circumstance to come our way. Meanwhile, our lives may be dismal and depressing and we have only one life. There is much wisdom in the old adage that help comes to those who help themselves'. (p48).
Spender (1996) is even more forceful in her championing of the online environment to the point where she says that 'the dividing line between teaching and learning ceases to become useful.' (p4)

There can be no doubt that universities are feeling these pressures, which are being exerted on learning organisations around the world, and the processes of evaluation are not immune from their effects. As an example of how these pressures are being embodied in evaluative processes, we now describe an online evaluation instrument developed at the University of South Australia. This allows students enrolled in any subject across the university to evaluate the teaching in that subject, via a survey instrument prepared online by teaching staff. Staff are able to customise the instrument for particular learning contexts or to address particular quality improvement concerns. It can be seen as a response to the social and economic climate in which universities find themselves, utilising technologically mediated methods, and providing students and staff with flexibilities in operation that are fundamental to current teaching and learning approaches (Hicks, Reid and George, 1999). We now turn to a particular instance: the teaching in a large first-year subject in the Art discipline.

Context: Evaluation in subject 10447

Computer Imagery and Communication (10447) is an introduction to computers within the context of traditional and digital media in art, architecture, and design practice. A first year subject for all undergraduate students in the (now defunct) Faculty of Art, Architecture and Design, the subject was one of several designed for 'faculty wide' delivery. The majority of students enrolled in 10447 are on-campus, full-time students rather than students expecting to study in a distance mode, and are drawn from the Bachelor of Applied Arts, Bachelor of Architecture, Bachelor of Interior Design, Bachelor of Industrial Design, Bachelor of Visual Communication and Bachelor of Visual Arts.

The subject was taught in 1997 in face to face mode and evaluated with a paper and pencil survey. In 1998 some components of the subject were developed for online delivery and a simple web form was used to gain student responses. The evaluation instrument that is the subject of this paper was first trialled in 1999 when the subject was delivered entirely online, and in 2000 it will be used again when the delivery mode will be online, supplemented by face-to-face tutorial support. The online features were delivered initially by stand-alone web resources and progressively used the interactive features of the University's online platform, UniSAnet (Reid, 2000). A recent addition to the online tools in UniSAnet has been the online evaluation tool, TellUs. This is now described.

TellUs - the new evaluation tool

Prior to the development of the online tool, a number of factors were considered. The need for an online mechanism to carry out subject evaluations was clear to academic staff. They needed an instrument that made it easy to create surveys, administer them and aggregate the resulting data. The online interface, supported by online access via campus-based computers and increasing access to Internet enabled computers off-campus, provided an easy to use method for students to record their responses. Database technology linked to these interfaces provided a ready and secure method for the aggregation and reporting of these responses. Further, the capacity of the UniSAnet online delivery system, linked as it is to every subject offering across the university, provided both a universal service and the capacity for the university to aggregate information across a number of subjects. This could then provide information about groups of subjects (eg a first year group of subjects, a particular stream in a course, or an entire course) and thus allow measures of course-based information from students responding to their experience in individual subjects. In this way it was thought possible to provide measures that had correlations to course-wide measures, like the CEQ which, as described above, is becoming such a powerful public measure of quality.

Based on these requirements, the TellUs evaluation tool was developed. It has the following components:
  1. Online construction
  2. Online submission of responses
  3. Online aggregation and reporting
  4. Online data analysis
Each of these is now described.

Online construction

Academic staff are provided with a simple web form with a three-step process. The three steps are:
  1. Create questions to ask students, if they wish, or select from a bank of questions provided
  2. Set the temporal availability of the survey
  3. Build their custom survey (currently of up to 20 Likert scale questions and 2 free-text questions)
Following this process a link is automatically inserted onto either the staff member's Home Page or the Home Page of their subject. In addition a URL is provided so it can be pasted into a web page, or an email.
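The construction step can be sketched as a minimal data model. This is a hypothetical illustration only; the names (`Question`, `Survey`) and the question limits are taken from the description above, but the actual TellUs implementation is not published:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Question:
    text: str
    kind: str  # "likert" or "free_text"

@dataclass
class Survey:
    subject_code: str
    opens: date    # temporal availability: first day students may respond
    closes: date   # last day students may respond
    questions: list = field(default_factory=list)

    # Limits as described in the text: up to 20 Likert-scale
    # questions and 2 free-text questions per survey.
    MAX_LIKERT = 20
    MAX_FREE_TEXT = 2

    def add(self, q: Question) -> None:
        n = sum(1 for x in self.questions if x.kind == q.kind)
        limit = self.MAX_LIKERT if q.kind == "likert" else self.MAX_FREE_TEXT
        if n >= limit:
            raise ValueError(f"too many {q.kind} questions")
        self.questions.append(q)

    def is_open(self, today: date) -> bool:
        return self.opens <= today <= self.closes
```

Once such a survey object exists, generating the link on the staff member's home page or the subject home page is simply a matter of publishing a URL that carries the survey's identifier.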


Figure 1: TellUs survey construction

Online submission of responses

Students go to the home page for the subject, or wherever otherwise directed, and click on the link to the evaluation survey. They are informed that their responses are confidential, but that they must log on to do the survey so that students cannot repeat their responses and thus skew the results. The students click on the responses of their choice, type in their answers to the free-text questions, and submit. The process typically takes less than five minutes.
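The key design tension here is that the login requirement must prevent repeat submissions without attaching the student's identity to the answers. One way this can be reconciled, sketched below under our own assumptions (TellUs's actual mechanism is not documented here), is to record only an opaque token derived from the login, kept in a structure separate from the responses themselves:

```python
import hashlib

class ResponseStore:
    """Sketch: one response per student, identity held apart from answers.

    Hypothetical design, not the published TellUs implementation.
    """

    def __init__(self, secret: str):
        self.secret = secret
        self.seen = set()     # opaque tokens of students who have responded
        self.responses = []   # answers stored with no student identifier

    def _token(self, student_id: str) -> str:
        # Hash the ID together with a per-survey secret, so the stored
        # token cannot be trivially reversed to a student number.
        return hashlib.sha256((self.secret + student_id).encode()).hexdigest()

    def submit(self, student_id: str, answers: dict) -> bool:
        t = self._token(student_id)
        if t in self.seen:
            return False  # repeat submission rejected
        self.seen.add(t)
        self.responses.append(answers)  # no ID travels with the answers
        return True
```

Because `seen` and `responses` are never joined, verifying "has this student already responded?" does not require being able to answer "what did this student say?".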


Figure 2: TellUs participant survey

Online aggregation and reporting

Since all data is stored in a database that can be linked to student enrolment data, it is possible to aggregate the information for any subject in the university. Staff members wishing to aggregate the student responses to their subject's survey click on the link to the survey at their subject page and log on; a web page then provides frequency and percentage graphs and statistics of student responses for the particular subject. These graphs are drawn 'on the fly' directly from the database and can be printed out. In addition, student comments can be downloaded. All data is time- and date-stamped, but confidentiality is provided by not publishing student ID data.
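The frequency-and-percentage report described above amounts to a simple tally per question. A minimal sketch (assuming a five-point Likert scale and responses stored as question-to-rating dictionaries; the real report also renders graphs):

```python
from collections import Counter

def aggregate(responses, question_ids):
    """Frequency and percentage of each Likert rating (1-5) per question.

    responses: list of dicts mapping question id -> rating (1-5).
    Returns {question: {rating: (count, percent)}}.
    """
    report = {}
    for q in question_ids:
        ratings = [r[q] for r in responses if q in r]
        counts = Counter(ratings)
        total = len(ratings) or 1  # avoid division by zero for empty surveys
        report[q] = {
            rating: (counts.get(rating, 0),
                     round(100 * counts.get(rating, 0) / total, 1))
            for rating in range(1, 6)
        }
    return report
```

Computing the report directly from the live response table on each request is what makes the 'on the fly' graphs possible: there is no separate batch aggregation step.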


Figure 3: TellUs data analysis

Online data analysis

The above reports, graphs and comments are available for any grouping of subjects. This allows staff with a course management responsibility to aggregate data (for the common core questions) over a number of subjects; typical examples are a first year group of subjects, a particular stream in a course, or an entire course. Again, data is constructed 'on the fly' directly from the database in an anonymous manner. It is in this area that the TellUs instrument has its particular advantage: whereas there are a number of computer-based evaluation packages on the market, the web-based operation of TellUs provides ease of use, complemented by linkage of the data to corporate student databases, thus allowing aggregation that reflects the academic structure of the institution.
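Pooling across a grouping of subjects is only meaningful for the core questions shared by every survey in the group. A hedged sketch of that idea (hypothetical function and data shapes; responses assumed to be question-to-rating dictionaries keyed by subject code):

```python
from statistics import mean

def course_summary(subject_responses, core_questions):
    """Mean rating per core question, pooled over all subjects in a group.

    subject_responses: {subject_code: [ {question: rating, ...}, ... ]}
    core_questions: the questions common to every subject's survey --
    subject-specific optional questions are deliberately excluded,
    since they are not comparable across subjects.
    """
    summary = {}
    for q in core_questions:
        ratings = [r[q]
                   for responses in subject_responses.values()
                   for r in responses if q in r]
        summary[q] = round(mean(ratings), 2) if ratings else None
    return summary
```

This is the mechanism by which subject-level feedback can be rolled up into course-level measures with some correlation to course-wide instruments such as the CEQ.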

Computer Imagery and Communication (10447) and the Student Evaluation of Subjects (SES) Questionnaire

The teaching practices and methods used to collect student evaluation data over the years 1997-2000 are now described. This includes results from the implementation of the SES within the subject in 1999.

1997

The subject 10447 was first delivered in 1997 in the face to face lecture/tutorial mode, and the SES was implemented in the paper and pencil version. A diverse cohort and a large enrolment (approximately 450-500 students) made for unwieldy collection and possibly inaccurate polling. The handwritten paper version also did not fully address the issue of providing an opportunity for anonymous feedback, as required by the University's policy on subject evaluation and quality improvement.

The paper and pencil method made it difficult for a small number of staff with limited time to aggregate the responses to the summative questions, or to make more than a superficial interpretation of the formative question responses.

1998

In response to the evaluation and the staff experience in the first delivery of the subject, 10447 was developed for on-line delivery in 1998, using an on-line study guide and face to face tutorials. A custom study guide and discussion groups were designed and set up on the Flexible Learning Centre (FLC) distance education web server (ROMA). It also seemed appropriate that a subject that dealt with the way computer technology is changing the way artists and designers work, be delivered on-line, and that the students engage directly with the technology that is changing contemporary practice.

During the 1998 delivery a web form/email SES was used to evaluate the subject, using an on-line form with the student response sent as an email generated by the server. This addressed the anonymity issue but not the verification of the student, and it still required manual aggregation.

1999

Computer Imagery and Communication was delivered in semester two 1999 completely on-line, rather than as a hybrid of lecture/tutorial and on-line methods. The study guide was revised using the resources of UniSAnet at the subject homepage (http://www.unisanet.unisa.edu.au/subjectinfo/subject.asp?SubjectCode=10447). Several of the drawing exercises were discarded in response to the 1998 SES, as these were duplicated in another subject.

The SES for 10447 during 1999 used the instrument developed by the FLC, now called TellUs, for the first time. The possibility of easily polling the students via the instrument prompted the subject team to try the SES twice, once about halfway through the semester and again at the end. The initial SES was very successful in the number of responses, while the second was disappointing. The main reasons for the poor response to the second SES seemed to be that students expect to do the evaluation only once, as is usual in other subjects, and 'curriculum overload' at the end of semester. In response to student comments in the first of the two SESs, the subject team modified the online study guide format part way through the semester. This provided a very useful way of acting on student feedback while teaching the subject, rather than waiting for the completion of teaching. UniSAnet allows for several study guides to be developed and each made available as needed.

Overall feedback from the students via the on-line SES showed that the main issues were problems with access and a lack of independent learning skills (related to the transition to university learning). Jones (1996), referring to Chin, notes that these issues are typical of students' perceptions in the early stages of a web-based learning environment.

The replies to the formative questions show a tension between the freedom and flexibility of studying on-line and the structure that a traditional lecture/tutorial study mode offers. For example, consider these responses from the same student when asked about the best and the worst aspects of the subject:

(best) 'I found studying at my own pace, in my own time rewarding and productive.'
and
(worst) 'I found it a challenge because there are no tutors reminding you of what should be included in assignments and when to hand them in.'
Access to the Internet was another issue identified. General purpose student computer pools at the University of South Australia provide access to the subject home page and any links to pages that reside on the University server; however, not all students had external access to the Internet. Students in this predicament used the limited library facilities or visited an Internet café. This situation has since changed with the introduction of an Internet access allocation from public pools for all University of South Australia students.

2000

In 2000 the subject team has simplified the assessment requirements, as confusing titles for the different activities and too many different tasks were identified as a problem. The team plans to reintroduce limited tutorial support, especially in the first three weeks, to address issues raised by the SES. These include independent learning issues and the skills needed to access the University of South Australia's computing, intranet and Internet resources (passwords, email, web browsing and participation in discussion groups).

In summary, the use of TellUs provided a feedback mechanism that was easy to administer, and also was flexible enough to use at numerous times throughout the semester. This allowed the teaching to be more responsive to students' needs, even though direct contact with students was not always possible.

Conclusions: Benefits to the teaching in 10447

Reflecting on the 10447 experience reinforces several key advantages of an online evaluation instrument like TellUs. The use of a web-based questionnaire combined with computer 'number crunching' does not replace the informed evaluation of the data by the teacher and colleagues. However, the instrument did replace the time-consuming collection and aggregation of the data, so that the starting point for meaningful interpretation of students' feedback was reached sooner.

Other issues

Using an SES, either on-line or in its traditional paper form, reinforces that the SES is only one way to get feedback on the teaching in a subject. Useful feedback on the teaching in 10447 also came from other sources.

Reflections on issues considered

Of critical importance is the ability to guarantee to students that their responses are anonymous. Whilst this is certainly possible in practice, there are a number of concerns that students can rightly have. Firstly, any electronic communication is susceptible to security breaches. The data here is not encrypted in any way, so it would be technically possible for software to 'sniff' the line and gain access to the data. This is not, however, considered to be a significant risk. Secondly, the need to require students to log on to prevent repeated submissions does open the possibility of linking the evaluation comments with a student record. This is carefully avoided in TellUs, but it is nevertheless a possibility. These confidentiality concerns are real, but can be minimised to an acceptable level. Indeed, an online evaluation as described above is possibly more confidential than hand-written surveys. In addition, there is the possibility for students to put their name on the survey to initiate personal follow-up, if desired.

Another interesting issue is the desirability of flexible evaluation methods for flexible teaching methods. The use of flexible delivery methods, and the acknowledgment that individual students learn at different paces, reinforces the need for evaluation methods that can adapt to the needs of specific students. The TellUs instrument, used within the online delivery of subjects, enables teachers to get feedback about their teaching materials and students' progress more quickly by automating some of the data collection processes. This increases the flexibility with which the evaluation process can be undertaken.

Future directions

Following the successful use of TellUs here and in other trials, where it has been used for both informal feedback and for summative evaluations, the University has decided to incorporate its use into a new Quality Assurance and Improvement policy (http://www.unisa.edu.au/adminfo/policies/academic/a35a.htm). This policy revolves around strong links between the quality and viability of subjects on the one hand, and quality improvement mechanisms on the other. The TellUs instrument can be used within this policy to construct an official University evaluation that may take the form of core questions, which are standard across the University, and optional questions, which are created by individual staff. The optional questions can be selected from a bank of questions that reflect a range of teaching contexts (eg lecturing, studio work, online teaching, etc). The existence of core questions in the instrument allows aggregation across subjects for a range of purposes, notably to calculate a quality measure that can be factored into other decisions, as policy determines.

Whilst there may be some uneasiness about a standardised instrument being used in this way, the broader environment within which we find ourselves dictates moves in this direction. We certainly feel there is a need to use the flexibilities provided by an online instrument to provide opportunities to evaluate in more comprehensive ways - online or not. We have found that the instrument described here is easy to use, reduces work on the part of the teacher, is compatible with modern teaching methods, and can be used to gather quality feedback upon which teachers can act for the benefit of their students. This, after all, is the point of engaging in evaluative practices in the first place!

References

Ashenden, D. & Milligan, S. (2000). Good universities guide to Australian universities and other higher education institutions. Port Melbourne: Mandarin.

Jones, D. (1996). Solving some problems of university education: A case study. Proceedings of AusWeb96, The Second World Wide Conference Gold Coast, Australia [verified 19 Oct 2001] http://ausweb.scu.edu.au/aw96/educn/jones/paper.htm

Hicks, M., Reid I. & George, R. (1999). Enhancing online teaching: Designing responsive learning environments. Paper presented at the 1999 HERDSA conference Cornerstones: What do we value in higher education? Melbourne, 12-15 July. [verified 19 Oct 2001] http://herdsa.org.au/vic/cornerstones/pdf/Hicks.PDF

Kemp, D. (1999a). [verified 19 Oct 2001] http://www.detya.gov.au/tenfields/

Kemp, D. (1999b). Quality Assured: A new Australian quality assurance framework for university education. Speech given at a Seminar on the New Quality Assurance Framework. Canberra 10 December. [verified 19 Oct 2001] http://www.detya.gov.au/ministers/kemp/dec99/ks101299.htm

Martin, E. (1999). Changing Academic Work: Developing the Learning University. Buckingham: Society for Research into Higher Education: Open University Press.

Martens, E. & Prosser, M. (1998). What constitutes high quality teaching and learning and how to assure it. Quality Assurance in Education, 6(1), 28-36.

Nicoll, K. (1998). Fixing "the facts": Flexible learning as policy invention. Higher Education Research and Development, 17(3), 291-304.

Ramsden, P. (1991). A Performance Indicator of teaching in higher education: The Course Experience Questionnaire. Studies in Higher Education, 16, 129-150.

Ramsden, P. (1998). Learning to lead in higher education. London: Routledge.

Reid, I. C. (2001). A university goes online: Avoiding throwing the innovative baby out with the strategic bath water. In L. Richardson and J. Lidstone (Eds), Flexible Learning for a Flexible Society, 574-582. Proceedings of ASET-HERDSA 2000 Conference, Toowoomba, Qld, 2-5 July 2000. ASET and HERDSA. http://cleo.murdoch.edu.au/gen/aset/confs/aset-herdsa2000/procs/reid1.html

Santhanam, E., Ballantyne, C., Mulligan, D., de la Harpe, B. and Ellis, R. (2000). Student questionnaires and teaching evaluation: Cutting the cloth to fit the purpose. In A. Herrmann and M.M. Kulski (Eds), Flexible Futures in Tertiary Teaching. Proceedings of the 9th Annual Teaching Learning Forum, 2-4 February 2000. Perth: Curtin University of Technology. http://cleo.murdoch.edu.au/confs/tlf/tlf2000/santhanam2.html

Smith, A. & Webster, F. (1997). The Postmodern University? Contested Visions of Higher Education in Society. Buckingham: Society for Research into Higher Education: Open University Press.

Spender, D. (1996). Creativity and the computer education industry. [verified 19 Oct 2001] http://www.acs.org.au/ifip96/dales.html

Van Dusen, G. (1997). The Virtual Campus: Technology and Reform in Higher Education. ASHE-ERIC Higher Education Report, 25(5). Washington DC: The George Washington Graduate School of Education and Human Development.

Acknowledgment

We would like to acknowledge the expert programming work done by Mr Quang-Dat Pham, of the UniSAnet team, in the development of TellUs.

Contact details: Ian Reid, Coordinator Online Services, Flexible Learning Centre, University of South Australia. Telephone (08) 8302 7074 Fax (08) 8302 6363 Email ian.reid@unisa.edu.au

Please cite as: Reid, I. C. and Welch, A. (2001). Evaluation goes online - What are the issues? In L. Richardson and J. Lidstone (Eds), Flexible Learning for a Flexible Society, 583-594. Proceedings of ASET-HERDSA 2000 Conference, Toowoomba, Qld, 2-5 July 2000. ASET and HERDSA. http://www.aset.org.au/confs/aset-herdsa2000/procs/reid2.html

