Full Paper
Educational Software: Criteria for Evaluation
H Geissinger
Instructional Media & Design P/L
Box 16, 47 Frenchman's Road
Randwick NSW Australia 2031
imed@ozemail.com.au
Abstract
This paper addresses concerns raised by academics, instructional designers and faculty administrators about the teaching/learning software delivered via Information Technologies and used in Australian universities. It examines issues raised by Bain and McNaught in their 1996 paper on the use of computer software, especially in relation to academics' identified teaching methods. A number of criteria are presented and discussed, for both formative and summative evaluations. The importance of culture (the culture in which a software program is developed and the cultural values brought to it by users) is discussed, as is the social evaluation of computer media. Researchers have discovered that formative evaluation, especially, is poorly understood by neophyte developers. This paper attempts to provide reasons why it is important, both in terms of cost-effectiveness and the value of the software to teaching/learning processes.
Introduction
Educational software has evolved over the past 40 years from Keller's Personalized System of Instruction to cognitively-stimulating microworlds and magical interactive multimedia experiences. Our views of the ways in which people learn effectively have changed with the evolution of media. We now understand that learners operate on many levels, some of which are subconsciously chosen, and that they (who are also ourselves) are influenced by both intrinsic and extrinsic factors. Educators no longer think that linear stepwise instructions can supply the learning needs of even the most naive learner or that the learning context can be rigidly controlled. It seems well understood that learners bring a variety of skills, knowledge and responses to every learning situation and that the teacher must supply a number of ways for them to approach new learning, engage with it and absorb it. This variety of approaches applies to any teaching/learning situation but recently, the focus of attention has centred on computerised educational media and the ways in which they shape the learning process.
There are many computer-delivered methods and media. The old Keller Plan methods are still used, even though the medium may dress them up with interesting visuals and instantaneous feedback on student input. There are the multimedia 'explorations' which offer learners a chance to see various representations of situations and events and possibly to investigate the backgrounds and contexts for themselves. There are the many CD-ROMs which offer extensive access to databases and simulations. And, as well as games and simulations on the Web or a local network, there are active chat facilities (for example, Internet Relay Chat software or IRCs) which encourage learners to talk to each other and to the lecturer about the learning experience.
Although both the software and the hardware available to educators have become extremely versatile and sophisticated over the last 40 years, the methods of using these for educational purposes fall into only a few categories. Bain and McNaught have consolidated their observations of teachers' approaches to student learning into three types: reproductive/transmitting, explanatory, and transformative. They suggest that each of these approaches has its place, in that reproductive methods are traditional and are expected by certain groups of students, that explanatory methods are helpful to many students who hold misconceived ideas about basic concepts, and conversational/transformative methods most closely approach a facilitative, supportive mental interface at which knowledge can be constructed and understood. These ways of thinking about student learning are characterised in the teaching designs which underlie the various types of software programs which are now so prevalent in Australia's universities. Because there appear to be so few actual categories, it is important that an academic identify his/her teaching approach and views of student learning, in order to assist in his/her choice of educational software. Then, for example, when the academic compiles a teaching portfolio or sits on a software-evaluation committee, that person will know how s/he views the teaching/learning process and the outcomes s/he would expect from the use of certain software.
Both the software and the medium through which it is delivered should be evaluated. Often, the educational institution decides on the hardware before departmental or individual decisions about media are made, so that constraints on possible software choices already apply. Besides evaluations for educational purposes, software to be delivered on a particular hardware system must be tested for its technical robustness -- a very necessary step, especially where a networked system or a Web site is involved.
Software Evaluation
Evaluation of teaching/learning software consists of two types: formative and summative. Formative methods of evaluation are used when a project's outline has been decided and work has begun on the design and development of its various parts. Formative evaluation can be deliberate, consisting of a series of methods to determine whether the project can work as planned, or it can be so ad hoc that it consists mainly of obtaining the opinions of passers-by on the visual effectiveness of a series of screens. Hayden and Speedy (1995), the first researchers to study software development methods under the grants funded by the former CAUT (Committee for the Advancement of University Teaching), found that, although formative evaluation was a project requirement, many grantees either paid it lip service or simply ran out of time before they could implement it. These authors suggest that the grantees did not understand the main purpose of such evaluation and so considered it an 'add-on'. Alexander and Hedberg, noting the level of academic effort which goes into developing educational software, state:
Given the high expectations of technology to provide more cost-effective learning and to improve the quality of the learning, together with the need to gain recognition for academics undertaking such development projects, the time has come for a re-examination of the role of evaluation in the development and implementation cycles. (Alexander & Hedberg, 1994:234)
Yet Moses and Johnson, in their review of CAUT's National Teaching Development Grants, found that '...some projects were funded despite [italics added] the proponents' lack of expertise in evaluation and of knowledge about learning theories and practices' (1995:36). Thus, although formative evaluations should occur during the process of developing a teaching program, whether it consists entirely of software or also has other components, it appears that many projects reach completion without the benefit of data which have the potential to inform and improve them.
Northrup, writing about formative evaluation for multimedia, states that it is '...an ongoing process conducted along every step of program development' (1995:24) and finds that, if a first draft or version of a product is created before a formative evaluation is conducted, then major modifications will not occur even when they appear to be required: too much money, effort and time will have gone into the product to allow a major rework to take place. To help prevent this unfortunate situation, she offers guidelines for the development team, including the need for all stakeholders to be involved and for formative evaluation to be supported and enforced at all stages. She also discusses how data can be collected and used. The only aspect Northrup does not address is the recognition of students or other potential users as stakeholders. However, Biraimah (1993), Barker & King (1993), Reiser & Kegelmann (1994) and Henderson (1996) all agree that learners are stakeholders and that they should help carry out the formative evaluation in a number of ways, even if they function mainly to check for biases of gender and race or to see if the program will actually load. Indeed, Reiser and Kegelmann (1994:64) note that student evaluation of software is necessarily subjective and should be supplemented by that of subject matter experts, media specialists and administrators.
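To make the idea of stage-by-stage data collection concrete, the following is a minimal sketch of how a development team might log formative-evaluation findings by development stage and stakeholder group, so that open issues are visible before work moves on. It is an illustration only, not a method from Northrup or the other authors cited; the stage and stakeholder names are assumptions.

```python
# Illustrative sketch only: a minimal log for ongoing formative evaluation,
# keyed by development stage and stakeholder group. The stage and
# stakeholder names are hypothetical, not drawn from Northrup (1995).
from dataclasses import dataclass

STAGES = ("outline", "design", "prototype", "field test")
STAKEHOLDERS = ("academic", "designer", "student", "administrator")

@dataclass
class Finding:
    stage: str        # where in development the issue was observed
    stakeholder: str  # who raised it; students count as stakeholders too
    issue: str
    resolved: bool = False

class FormativeLog:
    def __init__(self) -> None:
        self.findings: list[Finding] = []

    def record(self, stage: str, stakeholder: str, issue: str) -> None:
        assert stage in STAGES and stakeholder in STAKEHOLDERS
        self.findings.append(Finding(stage, stakeholder, issue))

    def unresolved(self, stage: str) -> list[Finding]:
        # Open issues act as a gate before the next stage, so that major
        # rework is not deferred until the product is nearly finished.
        return [f for f in self.findings
                if f.stage == stage and not f.resolved]

log = FormativeLog()
log.record("prototype", "student", "program fails to load on lab machines")
log.record("prototype", "student", "examples show gender bias in images")
print(len(log.unresolved("prototype")))  # -> 2
```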
In comparison, summative evaluations can be much wider in scope. They occur when the finished product is examined and can benefit from hindsight. Thorpe, an open learning specialist, defines evaluation as '...the collection, analysis and interpretation of information about any aspect of a programme of education and training, as part of a recognised process of judging its effectiveness, its efficiency and any other outcomes it may have' (1988:5). She notes that a number of characteristics go with this definition, such as inclusiveness, the search for both intended and unintended effects and the capability of the activity to be made public. She emphasises that evaluation is not synonymous with assessment.
Teaching approach
Although both types of evaluation are important, and should be conducted at appropriate times throughout the life cycle of any educational program, they are less effective for the stated purpose when they occur in isolation from the evaluator's teaching philosophy and preferred methods. Although some software examples may be considered more 'sophisticated' than others because, for example, the screens are more visually interesting or require more student input, those programs may actually be examples of the reproductive/transmitting method of teaching.
Bain and McNaught examined the ways in which academic faculty view student learning. They suggest that academics hold certain views on the ways in which students learn and therefore tend to adopt one of the following teaching approaches:

- a reproducing/transmitting (expository) approach, in which teaching is seen chiefly as the clear presentation and transmission of content;
- a pre-emptive (explanatory) approach, in which teaching anticipates and addresses the misconceptions students are known to bring to basic concepts; and
- a conversational/transformative approach, in which teaching supports students in constructing knowledge through interaction and feedback.
All three of these approaches can be found in educational software. For example, the reproducing/transmitting/expository conception can be found in software which provides drill-and-practice or the short explanation, selection of readings and student input-to-exercises model used in many Web-based subjects or in some examples of electronic books or simulations. The pre-emptive orientation, in which the academic knows much about the learning difficulties past students have exhibited, can be found in interactive multimedia as well as in games, simulations and problem-solving courseware. The conversational approach may be found in multimedia exploration-of-a-microworld examples and in simulations and games where students interact with both software and people to construct knowledge and receive feedback on their thinking.
Barker states that
People design 'learning products' in order to meet some perceived learning or training need. We therefore define 'learning design' as the overall effects of the cognitive activity that takes place within the...design team during the conception and formulation of a learning product... produced to meet some pre-defined pedagogic requirement. (1995:87)
Therefore, it is hardly surprising that some software is treasured only by its developers. When its advocates leave teaching, the product is no longer used.
Approaches to evaluation
A simple question to ask of any educational software is, "Can this product actually teach what it is supposed to?" The question is simple to ask but often difficult to answer, because the product may have so many beguiling features. It requires the evaluator to recognise his/her own view of the ways in which students learn, to relate that view to the learning objectives of that portion of the course, and to determine how and whether those objectives are carried out in the software.
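One deliberately simplified way to begin answering that question is to tabulate the stated learning objectives against the software activities that claim to address them, so that any objective with no matching activity is exposed as a gap. The sketch below is hypothetical; the objective and activity names are invented for illustration.

```python
# Hypothetical sketch: map a course's learning objectives to the software
# activities that are supposed to teach them, to expose coverage gaps.
# All objective and activity names are invented for this example.
objectives = {
    "explain titration end-points",
    "select an appropriate indicator",
    "interpret a titration curve",
}

# Which objectives each software activity claims to address.
activities = {
    "simulated titration lab": {"explain titration end-points",
                                "interpret a titration curve"},
    "indicator drill": {"select an appropriate indicator"},
}

covered = set().union(*activities.values())
uncovered = objectives - covered
if uncovered:
    print("Objectives not addressed by the software:", uncovered)
else:
    print("Every stated objective is addressed by at least one activity.")
```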
Table 1: Key factors in successful interactive multimedia courseware

| Category | Discussion |
| --- | --- |
| Quality of end-user interface design | Investigation shows that the designers of the most highly-rated products follow well-established rules and guidelines. This aspect of design affects users' perception of the product, what they can do with it and how completely it engages them. |
| Engagement | Appropriate use of audio and moving video segments can contribute greatly to users' motivation to work with the medium. |
| Interactivity | Users' involvement in participatory tasks helps make the product meaningful and provokes thought. |
| Tailorability | Products which allow users to configure them and change them to meet particular individual needs contribute well to the quality of the educational experience. |

Excerpted from Barker & King (1993), p. 309.
The technical approach to evaluation used to be very important. For example, many papers were written in the 1980s about the importance of 'debugging' software and ensuring it would run as intended. Students were said to be frustrated by technical problems and to complain that these interfered with their learning. Technical evaluations of software are still significant, even though students of the 1990s are accustomed to computer crashes and often know how to deal with them. Technical difficulties often arise with products built with authoring tools inside an educational organisation. With some echoes of the well-known MicroSift courseware proformas, Squires and McDougall (1994) provide a helpful series of lists for technical evaluations of software.
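For Web-delivered materials, part of such a technical evaluation can be automated. The following is a minimal, illustrative sketch (the file layout is an assumption) which checks that local hyperlink and image references in a course page resolve to files that actually exist -- one small instance of confirming that a program 'will actually load'.

```python
# Illustrative technical check for Web-delivered courseware: verify that
# local href/src references in an HTML page resolve to existing files.
# The directory layout is an assumption made for the example.
import os
from html.parser import HTMLParser

class RefCollector(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.refs: list[str] = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name not in ("href", "src") or not value:
                continue
            if "://" in value or value.startswith(("mailto:", "#")):
                continue  # keep only references to local files
            self.refs.append(value)

def broken_refs(page_path: str) -> list[str]:
    """Return local references in the page that point at missing files."""
    parser = RefCollector()
    with open(page_path, encoding="utf-8") as f:
        parser.feed(f.read())
    base = os.path.dirname(page_path)
    return [r for r in parser.refs
            if not os.path.exists(os.path.join(base, r.split("#")[0]))]

# e.g. for page in glob.glob("courseware/**/*.html", recursive=True):
#          print(page, broken_refs(page))
```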
Barker and King (1993) have developed a method for evaluating interactive multimedia courseware. They provide four factors which their research suggests are of key importance to successful products. They state that several other factors should be considered as well, although their importance is seen as somewhat less than the four listed in Table 1. These secondary factors are: appropriateness of multimedia mix, mode and style of interaction, quality of interaction, user learning styles, monitoring and assessment techniques, built-in intelligence, adequacy of ancillary learning support tools and suitability for single user/group/distributed use (Barker & King, 1993:309).
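As an illustration of how the primary/secondary distinction might be operationalised, the sketch below weights Barker and King's four key factors more heavily than the secondary ones in a simple scoring sheet. The 1-5 scale and the weights are assumptions made for this example; Barker and King do not prescribe them.

```python
# Hypothetical weighted scoring sheet for Barker & King's (1993) factors.
# The 1-5 rating scale and the weights are illustrative assumptions only.
PRIMARY = ["interface design", "engagement", "interactivity", "tailorability"]
SECONDARY = ["multimedia mix", "mode and style of interaction",
             "quality of interaction", "user learning styles",
             "monitoring and assessment", "built-in intelligence",
             "ancillary support tools", "single/group/distributed use"]

def weighted_score(ratings: dict[str, int],
                   primary_weight: float = 2.0,
                   secondary_weight: float = 1.0) -> float:
    """Average the 1-5 ratings, counting primary factors more heavily."""
    total = weight_sum = 0.0
    for factor, rating in ratings.items():
        w = primary_weight if factor in PRIMARY else secondary_weight
        total += w * rating
        weight_sum += w
    return total / weight_sum

ratings = {f: 4 for f in PRIMARY} | {f: 3 for f in SECONDARY}
print(f"Overall: {weighted_score(ratings):.2f} / 5")
```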
Although Barker and King's factors do make substantial contributions to the 'look and feel' of successful products, some of them need more explanation. For example, the 'mode and style of interaction' impacts on how a user navigates through the product. Difficulty in choosing an appropriate navigation method may arise if the designer and the academic hold different views of the ways in which users will learn from the software.
Young (1996) points out that, if students are allowed to control the sequence and content of the instruction, they must acquire self-regulated learning strategies for the instructional experience to be successful. Young's research concerned students in seventh grade -- a learning stage which might be construed as 'naive' and at which students may hold beliefs which are poorly thought out. Lawless and Brown, in surveying research on navigation and learning outcomes, find that learners who are '...limited in both domain knowledge and metacognitive skills' (1997:126) may not benefit from a high degree of learner control and may get lost in the environment. They also note that such students may be beguiled by special features not central to the instruction and fail to acquire the information important to the section. This finding is supported by Blissett and Atkins (1993), who find that the sophistication of the multimedia environment may prevent some students from taking time to reflect on what they have just learned. Yildiz and Atkins suggest that students do not cope well with multimedia if they lack '...advance organizers or mental frameworks on which to hang the surrogate experience...they therefore had difficulty in making personal, meaningful sense of what they saw and did...' (1993:138). Laurillard (1993:30-31) notes that university students may hold naive beliefs and that university lecturers may make erroneous assumptions about their students' grasp of prerequisite concepts. Unless software is specifically designed to expose naive beliefs and support the construction of more accurate knowledge, it is likely that at least some users can navigate through a product without recognising that they hold erroneous ideas.
The importance of context
Administrators of tertiary institutions which make increasing use of educational software to supply some of the teaching may perceive that the desired learning has taken place if the assigned work is accomplished. Ramsden argues that the context of learning is very important and remarks on unintended consequences of planned educational interventions, which can produce an increase in superficial learning (1992:62-63) rather than the opposite. He suggests that assessment methods may have a negative effect on student learning. If these effects are real, then an outcome of multimedia teaching could be superficial learning, just as with more traditional methods.
Some multimedia proponents suggest that experiential, visually accurate, interactive software will help users attempt to solve problems in the ways that experts would. Henderson, after a careful long-term examination of mature students' work with multimedia packages, states that '...knowledge acquisition is essentially and inescapably a socio-economic-historical-political-cultural process' (1996:90) and that students' mental processes depend on context specificity. Thus, students from cultures different from that in which a software product is developed are likely to experience difficulties in using that product. Baumgartner and Payr state:
Learning with software...is a social process in at least two ways: first, it takes place in a certain social situation (in the classroom, at work, at home) and is motivated by it. Secondly, any relevant learning process has as its goal the ability to cope with the social situation (professional or everyday tasks, etc.). The evaluation of interactive media has to satisfy three conditions: 1. It has to take into account the social situation in which the media are used, and must not be limited to the media themselves; 2. It has to take into account the goal of dealing with complex social situations and must not limit itself to the isolated individual learner; and 3. It must take into account the specific forms of interaction between the learner and society. These interactions range from the passive reception of static knowledge to the active design of complex situations that characterizes the 'expert'. (1996:32)
Ramsden's concern with context in traditional classrooms is thus seen to be of relevance to software developers or evaluators who want educational products which will fulfil a variety of teaching/learning needs.
Conversation
Laurillard (1993) notes that 'conversation' about one's perception of an instructional sequence is an important part of learning, and gives examples of ways in which conversation can be carried out by instructors and students face-to-face (102-104) or via intelligent tutoring systems. Blissett and Atkins (1993) advocate a strong teacher role in pursuing conversation about multimedia experiences and in promoting student reflection on learning, partly because of their finding that students may not have acquired knowledge at a deep level from the multimedia experience itself. Collis (1996), agreeing with these authors about the importance of conversation about multimedia learning, advocates the provision of computerised communication opportunities between the lecturer and students and among the students themselves. She believes that what she terms 'telelearning', even for lecturers who use reproductive/transmitting teaching methods, forces the introduction of more communication into the 'instructional balance' (Collis, 1996:299). Her suggestion of supplying IRC-type group discussion facilities and email communication with the instructor as part of each computer-mediated instructional event would offer students the opportunity to engage in conversation about what they are learning even while it happens. Learners are much more likely to stop and think about their learning if they are offered an opportunity to share it with others than if they are simply carried along by a multimedia experience, never 'anchored' to the active, cognitive world.
Summary
A number of approaches to formative and summative evaluation have been touched upon above, supported by a set of references which should help beginning evaluators explore this time- and resource-intensive area further. It is no wonder that software developers may wish to avoid formative evaluation at every step of their project, especially if they have commenced it in a wave of enthusiasm, as Hayden and Speedy have noted. Designers and developers working on large-scale European projects such as DELTA (Barker & King, 1993) have had to turn away from the fun of carrying out innovative ideas and instead establish criteria by which the work can stand up to evaluation which, as Thorpe states, is capable of being made very public. The work of software evaluation is very necessary, but it is also expensive if done properly. The time and money required -- whether for formative evaluation or for an evaluation of whether a finished program is effective -- should be a budget item whenever software development or use is considered.
References
Alexander, S. & Hedberg, J. (1994). Evaluating technology-based learning: Which model? In K. Beattie, C. McNaught & S. Wills (Eds.) Interactive Multimedia in Education: Designing for Change in Teaching and Learning. The Netherlands: Elsevier Science B.V.
Barker, P. (1995). Evaluating a model of learning design. In H. Maurer (Ed.) Proceedings, World Conference on Educational Multimedia & Hypermedia. Graz, Austria: Association for the Advancement of Computing in Education.
Barker, P. & King, T. (1993). Evaluating interactive multimedia courseware -- a methodology. Computers in Education 21 (4), 307-319.
Baumgartner, P. & Payr, S. (1996). Learning as action: A social science approach to the evaluation of interactive media. In P. Carlson & F. Makedon (Eds.) Proceedings, World Conference on Educational Multimedia & Hypermedia. Boston: Association for the Advancement of Computing in Education.
Biraimah, K. (1993). The non-neutrality of educational computer software. Computers & Education 20(4), 283-290.
Blissett, G. & Atkins, M. (1993). Are they thinking? Are they learning? A study of the use of interactive video. Computers & Education 21 (1/2), 31-39.
Collis, B. (1996). Tele-learning in a digital world: The future of distance learning. London: International Thomson Computer Press.
Hayden, M. & Speedy, G. (1995). Evaluation of the 1993 National Teaching Development Grants. Project commissioned by the Committee for the Advancement of University Teaching. Lismore, Australia: Southern Cross University.
Henderson, L. (1996). Instructional design of interactive multimedia: A cultural critique. Educational Technology, Research & Development 44(4), 85-104.
Laurillard, D. (1993). Rethinking university teaching: A framework for the effective use of educational technology. London: Routledge.
Lawless, K.A. & Brown, S.W. (1997). Multimedia learning environments: Issues of learner control and navigation. Instructional Science 25(2), 117-131.
Moses, I. & Johnson, R. (1995). Review of the Committee for the Advancement of University Teaching. URL http://uniserve.edu.au/caut/report/finrep.html
Northrup, P.T. (1995). Concurrent formative evaluation: Guidelines and implications for multimedia designers. Educational Technology 35 Nov/Dec, 24-31.
Ramsden, P. (1992). Learning to teach in higher education. London: Routledge.
Reiser, R.A. & Kegelmann, H.W. (1994). Evaluating instructional software: A review and critique of current methods. Educational Technology, Research & Development 42(3), 63-69.
Squires, D. & McDougall, A. (1994). Choosing and using educational software: A teacher's guide. London: Falmer Press.
Thorpe, M. (1988). Evaluating open & distance learning. Harlow, Essex: Longman Open Learning.
Yildiz, R. & Atkins, M. (1993). Evaluating multimedia applications. Computers & Education 21(1/2), 133-139.
Young, J.D. (1996). The effect of self-regulated learning strategies on performance in learner controlled computer-based instruction. Educational Technology Research & Development 44 (2), 17-27.
Additional references:
Flagg, B.N. (1990). Formative evaluation for educational technologies. Hillsdale, NJ: Lawrence Erlbaum Associates.
Land, S. M. & Hannafin, M.J. (1997). Patterns of understanding with open-ended learning environments: A qualitative study. Educational Technology, Research & Development 45(2), 47-73.
Saunders, D. & Gaston, K. (1996). An investigation in evaluation issues for a simulation training programme. British Journal of Educational Technology 27(1), 15-23.
Weston, C., Le Maistre, C., McAlpine, L. & Bordonaro, T. (1997). The influence of participants in formative evaluation on the improvement of learning from written instructional materials. Instructional Science 25(5), 369-386.
MicroSift Computer Technology Program, Northwest Regional Educational Laboratory, 300 SW 6th Ave., Portland, OR, USA 97204
(c) H Geissinger
The author(s) assign to ASCILITE and educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The author(s) also grant a non-exclusive licence to ASCILITE to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the ASCILITE 97 conference papers, and for the documents to be published on mirrors on the World Wide Web. Any other usage is prohibited without the express permission of the authors.