
Developing evaluation and design methodologies for multimedia based learning

Leonard L. Webster
Monash University Distance Education Centre
Given the nature of a multimedia environment, that is, one that uses a combination of media such as hypertext, image, sound, animation and video, and given the emerging use of multimedia in education and training, an alternative evaluation and research methodology with a range of applicability is required to fully ascertain the instructional potential and effectiveness of the technologies. This paper reports on the development of an enhanced case study technique which combines case study methods with video, computer log files and 'speak aloud' techniques to provide qualitative data. The benefits and developmental problems of the method are discussed, along with examples of studies attempted so far, and some recommendations are made for the ongoing development of the method.


With the emergence of multimedia in education and training applications, research is required into how the various multimedia configurations affect different types of learners, as well as into identifying the important elements of instructional design. The method (Carlson 1991) should be usable with an entire instructional system in real life educational settings, and not necessarily limited to controlled laboratories investigating isolated parts of systems.

The literature identifies the need to research and evaluate educational and training multimedia environments using new strategies and from perspectives that include human-computer interaction and learning theories (computer applications provide a unique means of observing the learner's decision processes). Carlson (1991) recommends research into instructional design and the matching of learning styles to programs, whilst Park (1991) suggests learner control principles and strategies, selection of information representation forms, use of hypermedia as a CBI knowledge base, and development of intelligent hypermedia as suitable areas.

Reeves (1991) suggests that investigations of multimedia should include observational and regression methods and recommends these because of the exploratory nature of research that must be done. Observational studies are needed to identify the salient variables, whilst regression and computer modelling methods may be employed to explore relationships among specified variables.

Park (1991) also argues that behavioural psychological paradigms and procedures used in traditional audio visual research, where instruction by one medium is compared to another, are not appropriate for studying the instructional applications of modern interactive computer technology. Procedures and strategies for dynamically selecting information representation forms during the ongoing process of instruction should be researched.

Exploratory Sequential Data Analysis (ESDA) for enhanced case study methodology

One method that provides identification of salient variables for analysis is an enhanced case study method. The case study is enhanced by video, log files[1] and interviews, and provides a collection of sequential data on which analysis may be performed. This paper explores this method of data collection and the types of variables it generates.

A pilot study involving the provision of distance education materials on a CD-ROM (McNamara and Webster 1992) used a case study approach to gain insight into learning processes in a hypermedia environment and to identify various types of data and observations that could be made from an observational approach to research in multimedia. From this, further refinements were made to the method which included incorporation of log files and camera 'views' that were most appropriate to record the interactions of the participants.

Romiszowski (1990) suggests the use of case study approaches for situations which require variety, complementation of traditional evaluative strategies, bridging theory and practice and development of evaluative skills. These categories provide an appropriate framework for research into multimedia.

Other researchers have also explored the use of case study with video. Cobb (1987) conducted analysis of video taped interviews to test hypotheses in real educational settings. Krajcik (1988) used the recorded video output from a microcomputer and students' verbal commentary via a microphone input. This technique allowed students' comments about their observations, perceptions, predictions, explanations and decisions to be recorded simultaneously. Kidder and Judd (1986) suggest that interview techniques provide opportunities to correct misconceptions and misunderstandings and to clarify or probe inadequate or vague responses. It can be suggested that storing these sessions on video tape allows further analysis by capturing data on the subtle forms of expression, for example, gestures, facial expressions and hesitancies.

Method

The method used in this study gathers data from videotaping of the session, log file generation, a time display and an interview. Prior to the session, participants were required to complete a biographical questionnaire and were given tasks to complete during the session.

The participants were seated at a computer workstation comprising a computer, monitor, keyboard and mouse. The workstation has used both PC and Macintosh computers, the former in a pilot study and the Macintosh in more recent developments. This change in platform was dictated by the practical circumstances of the availability of suitable software and hardware, whilst retaining important commonalities.

Cameras were positioned so that two of the three were not intrusive to the view of the participant, with the third located at a distance so as to be as unobtrusive as possible. The video equipment was placed on the workstation as a component of the research environment. The images were recorded simultaneously to give views of the participant, the keyboard and mouse, and the screen. Superimposed on this image was a time display accurate to 0.01 of a second.

The cameras were positioned to capture the screen fully and avoid any obstruction by movement of the participant. To achieve this, two cameras were dedicated to the screen and participant. The third camera was mounted overhead to provide a clear view of the keyboard and mouse. The resultant image was recorded simultaneously, giving the three views. The limitation of using a camera to capture screens of the test instrument was a consequence of using available technologies; it is acknowledged that more sophisticated technologies are available to take the monitor output directly to video tape.

Throughout the recorded session participants were asked to 'think aloud' to assist in the clarification of their responses and provide insight into their thinking processes.

A log file was incorporated to time stamp the actions selected by the participant. The log file recorded the time of action, screen identification and command selected. A further file was recorded which displayed the participant's history, giving the title of each screen and the duration since it was last visited. This was used to supplement or clarify the screen identification from the recorded image.
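The paper does not specify the layout of these files, but the records described above can be sketched as follows. The field names and the tab-separated format are assumptions for illustration only.

```python
# Minimal sketch of a time-stamped log record of the kind described above:
# time of action, screen identification, and command selected.
# The tab-separated layout is an illustrative assumption, not the study's format.
from dataclasses import dataclass

@dataclass
class LogEntry:
    time: float      # seconds from session start
    screen: str      # screen identification
    command: str     # command selected by the participant

def parse_log(lines):
    """Parse 'time<TAB>screen<TAB>command' records into LogEntry objects."""
    entries = []
    for line in lines:
        t, screen, command = line.rstrip("\n").split("\t")
        entries.append(LogEntry(float(t), screen, command))
    return entries

log = parse_log([
    "0.00\tMenu\topen",
    "12.40\tElectrolysis\tnext",
    "95.75\tElectrolysis\thelp",
])
```

A structured record like this is what makes the later comparisons of screen, time and command frequency straightforward to compute.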

On completion of the session the participants were interviewed. The questions sought the participants' responses to the program, identified difficulties and, where possible, clarified responses observed during the session. These responses were taken in note form, although future developments will include audio taping of the interview. Further interviews may be required following detailed analysis of the data.

The data gathered by the log file can be used to generate graphs or tables comparing the variables of screen identification, time, and frequency of use of screens or commands, and to identify hesitations and common pathways. The data were displayed as a text file in a word processing/spreadsheet package, which allowed easy manipulation.
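The kind of tabulation described above can be sketched in a few lines. The (time, screen, command) tuple layout is an assumption carried over from the log file description; the study itself used a word processing/spreadsheet package for this step.

```python
# Sketch: screen visit frequency and dwell time from time-ordered
# (time, screen, command) log tuples. Layout is an illustrative assumption.
from collections import Counter

def dwell_times(entries):
    """Return (screen, seconds_on_screen) for each visit.

    Dwell time on a screen is the gap until the next logged action;
    dwell on the final screen is unknown and therefore omitted.
    """
    visits = []
    for (t0, screen, _c0), (t1, _s1, _c1) in zip(entries, entries[1:]):
        visits.append((screen, t1 - t0))
    return visits

entries = [
    (0.0, "Menu", "open"),
    (12.4, "Electrolysis", "next"),
    (95.7, "Cells", "map"),
    (101.2, "Menu", "quit"),
]
visits = dwell_times(entries)
frequency = Counter(screen for _t, screen, _c in entries)
```

From such a table, hesitations show up as unusually long dwell times and common pathways as repeated screen sequences.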

The test instruments varied from an Electrochemistry program (pilot study) to a hypertext-style program on the Macintosh which incorporated navigational mapping strategies. This change of test instrument was dictated by the practicalities of available technology and compatible software, whilst retaining important commonalities. The Electrochemistry program is part of a first year Applied Science degree and included hypertext, images, animation and a graphical user interface with an icon bar offering twelve functions. The hypertext program was produced using HyperCard and utilised mapping strategies for navigation through the information. Both programs are available on CD-ROM.

Participants were set tasks for each of the studies. Initially, the task was for students to familiarise themselves with the material and prepare for a formal assessment of the material contained within the Electrochemistry program.

For the report, the task comprised five questions, two of which were open ended and required subjective responses. The other three questions were objective in style and required finding specific information. The three objective questions were given to provide participants with tasks that allowed development of the navigational strategies required by the program and to build confidence.

Discussion

The studies indicate that this methodology provides a means of going beyond the learning interface to reveal the cognitive processes in use. It is important to identify some issues raised as a consequence of this approach.

Pre-session training may be required for participants in the hardware to be used, the types of software, and think aloud procedures. The type of hardware and its demands on participants need to be identified and suitable training provided so that routine operational functions do not intrude into the sessions. This may address mouse manipulation, keyboard entry of information, adjustment of screen intensity and angle, and introductory setup of materials. The use of think aloud procedures is useful, but effective use demands that voice levels are adequate and thoughts are expressed as clearly as possible, not muffled or slurred. Some of this may be overcome with a personal microphone located on the participant. Krajcik (1988) comments that when students did not verbalise their thoughts and observations, the investigator would prompt them with comments such as "please share your thoughts with us" or "please say what you are doing". As far as possible, the investigator allows participants to work through the programs without interference while encouraging them to think aloud. This requires skilful intervention and judgement by the investigator so that interruptions are not overused.

Preparing the participant by providing background information and insight into the aims of the investigation may influence results: if participants know what outcomes are anticipated, the results obtained may be affected. This raises the risk of taking the results out of context and applying them inappropriately to aspects of multimedia training and learning. This will be more accurately assessed as patterns and trends are identified over a number of case studies.

Conversely, there is a research responsibility to identify organisational preliminaries, to place the sessions in context and to gather important biographical data, along with giving participants the opportunity to become familiar with the type of environment in which they are working. This would include the opportunity to raise questions and discuss aspects of the session that they felt required clarification. This pre-session time was ideal for establishing the tone of interaction between the researcher and participant and a personalised environment in which the participant was relaxed, and which more accurately reflected the type of workstation environment they would encounter in actual use of a multimedia program.

The personalised environment was enhanced further by the presence of only the participant and investigator during the sessions. The participants appeared relaxed in the environment and were not perturbed by the amount of recording equipment which surrounded them. All participants involved themselves quickly in the program, with only a little prompting required by the investigator for explanation or to reinforce thinking aloud procedures. In future sessions this will be monitored more carefully, as it provides insight into the cognitive processes being observed and explains hesitancies and decision making processes.

There are some encouraging observations, and trends are emerging at these early stages of trialling the method. These include factors relating to technology (Table 1), learning (Table 2) and context (Table 3).

Table 1: Technology factors

  1. Ability to utilise program features
  2. Influence of prior computer experience
  3. Computer style interface preferences
  4. Use of the mouse in a tracking role
  5. Reference to help systems

Table 2: Learning factors

  1. Confidence
  2. Hesitancy
  3. Confusion
  4. Satisfaction
  5. Efficient and inefficient learning
  6. Periods of inactivity
  7. Periods of high learning activity
  8. Periods of loss of interest
  9. Time to reach significant points of knowledge
  10. Desire to know limits of system
  11. Exploration vs direct pathways
  12. Linear vs browsing
  13. Eliminations
  14. Synthesis of information
  15. Confirmation of existing knowledge
  16. Strategies
  17. Interest and motivation levels

These listings and categories are preliminary indicators and may be modified as further analysis is performed. Although video analysis can be extremely time consuming and difficult, the use of log files and dialogues provides 'signposts' to segments of the session that are of interest to the researcher. If, for example, the log file indicates a lengthy time on any one screen, that segment can be viewed on tape and an analysis made of the cognitive processes or other factors that occurred. The sources of data provide a number of perspectives on which the researcher can draw to assist analysis of specific areas of interest.
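The 'signpost' idea above can be sketched as a simple filter over the log-derived dwell times: any screen held longer than a chosen threshold yields a timecode at which to cue the video tape. The record layout and the 60-second threshold are illustrative assumptions, not values from the study.

```python
# Sketch of the 'signpost' technique: flag screens where the participant
# dwelt longer than a threshold, so the matching video segment can be
# reviewed. Record layout and threshold are illustrative assumptions.
def signposts(visits, threshold=60.0):
    """visits: list of (screen, start_time, duration) tuples.

    Returns (screen, start_time) for each dwell exceeding the threshold,
    i.e. the tape positions worth reviewing in detail.
    """
    return [(screen, start) for screen, start, dur in visits if dur > threshold]

visits = [
    ("Menu", 0.0, 12.4),
    ("Electrolysis", 12.4, 83.3),
    ("Cells", 95.7, 5.5),
]
flagged = signposts(visits)   # cue the tape to each flagged start time
```

This keeps the expensive step, frame-by-frame video analysis, confined to the few segments the log file marks as interesting.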

Table 3: Contextual factors

  1. Structure of menus
  2. Participant's understandings of instructional statements
  3. Return to known position/screens
  4. Most common pathways
  5. Tools and features used/unused
  6. Attitudes to design

Conclusion

Whether this method adequately taps into learning styles and cognitive processes will become clearer with further case studies. However, where human interaction with a computer is concerned, the method does provide a realistic environment in which to undertake further research in the areas discussed in this paper. If multimedia is to be a worthwhile educational technology it must meet the needs of a variety of learning styles in a highly personalised human-computer environment.

Endnote

  1. A log file is a computer file that may contain some or all of the following information: time; screen identification; commands used.

References

Cobb, P. (1987). An investigation of young children's academic arithmetic contexts. Educational Studies in Mathematics, 18, 109-124.

Kidder, L. & Judd, C. (1986). Research Methods in Social Relations. Holt, Rinehart and Winston, New York.

Krajcik, J. S., Simmons, P. E. & Lunetta, V. N. (1988). A research strategy for the dynamic study of students' concepts and problem solving strategies using science software. Journal of Research in Science Teaching, 25(2), 145-155.

Park, O. (1991). Hypermedia: Functional features and research issues. Educational Technology, August, 24-31.

Romiszowski, A. (1990). The Case Study Methodology. Interactive Media and Instructional Design. 7th International Conference on Case Method Research and Case Method Application, University of Twente, The Netherlands.

Seels, B. (1989). The instructional design movement in educational technology. Educational Technology, 29, 11-15.

Webster, L. & McNamara, S. E. (1992). Power at my fingertips but which button do I press? Researching and evaluating learner interactions in hypermedia. Proceedings of the International Interactive Multimedia Symposium, Perth, 285-297.

Author: Leonard Webster is a Course Developer with Monash University Distance Education Centre, Churchill, Vic 3842.
Email: lenw@giaea.cc.monash.edu.au

Please cite as: Webster, L. L. (1992). Developing evaluation and design methodologies for multimedia based learning. In J. G. Hedberg and J. Steele (eds), Educational Technology for the Clever Country: Selected papers from EdTech'92, 73-78. Canberra: AJET Publications. http://www.aset.org.au/confs/edtech92/webster.html

