ALLIANCE: A Student's Best Friend
Tom Fenton-Kerr, Tony Koppi, Marcel Chaloupka, Stephen Cattle, Alison Anderson
tfk@nettl.usyd.edu.au
NeTTL, Centre for Teaching and Learning, University of Sydney
Abstract
Part of the brief of the New Technologies in Teaching and Learning (NeTTL) group at the University of Sydney is to conduct research into models of learning and supporting software. A model for collaborative learning in a networked environment currently under development has provided the setting for rich areas of research into ways of using information technology (IT) to leverage the learning process. This paper describes an approach in which an electronic assistant for education (ALLIANCE, an Agent for Leveraged Learning In A Networked Collaborative Environment) acts as an adjunct to traditional forms of instruction, providing support or scaffolding where appropriate so that a learning task can be achieved more efficiently. The main focus is on the middle ground between users (both instructors and students) and the learning software, i.e., the software interface.
Introduction
Interface design in its most general sense covers issues as broad as the presentation of mixed media forms, knowledge navigation, and the provision of support in the form of on-line help files, guides and software agents. These support items constitute a range of software programs that, in an educational context, are designed to aid the completion of a learning task in some way. Collectively, they form what we term an ALLIANCE (an Agent for Leveraged Learning In A Networked Collaborative Environment), which is able to undertake a range of tasks, each of which adds to the possibility of success in a given field.
Software Learning Agents
We should first distinguish what we mean by a software learning agent in this particular context. Here, the phrase is employed in the sense of a piece of software used for an educational purpose, as opposed to software specifically designed to learn or acquire new knowledge by itself. This is not to suggest that certain manifestations of agent-based software are incapable of such acquisition: in fact, many learning agents must adapt and 'learn' if they are to be capable of serving their users' learning needs effectively.
The Macquarie Dictionary defines an agent as "a person acting on behalf of another", or "a natural force or object producing or used for obtaining specific results". While software agents could hardly be classified as "natural", they are certainly used for obtaining results, and, in the case of autonomous learning agents acting remotely, can be said to be acting on behalf of another entity, such as a learner, teacher or other software. IBM's White Paper, The Role of Intelligent Agents in the Information Infrastructure (Grosof et al., 1995), uses an interpretation close to the dictionary definition of agency mentioned above, in a rather more specialised context, defining agents as:
... Software entities that carry out some set of operations on behalf of a user or another program with some degree of independence or autonomy ...employ[ing] some knowledge or representation of the user's goals or desires.
This latter definition fits within our concept of a software learning agent, incorporating, as it does, the elements of agency, intelligence, and in certain circumstances, mobility.
Wooldridge and Jennings (1995:2) describe software agents as having traits such as autonomy, social ability, reactivity, and pro-activeness. Reactivity is a key defining trait for Wooldridge (1997:48): the ability to "modify its behaviour as appropriately as environmental circumstances change", together with some sense of goal-directed behaviour, is what distinguishes an agent from other software programs that merely use structured programming or are designed to be user friendly.
Taxonomically, software learning agents fall within the domain of artificial intelligence (AI) as applied to educational settings. An overall view of software agents, classified according to their functionality, might include the following:
Obviously, there is a large amount of overlap between the functional classifications of agents mentioned above. A learning agent that acts autonomously in searching a network on a user's behalf and then presents its findings in an adaptive format makes use of several of them. In doing so, it may need to call on other agents, resources and algorithms, as represented in Figure 1 below. One element that appears to be common to all learning agents, however, is their position on the learning map: between the user and the information required to complete a learning task.
To summarise: a software learning agent is a piece of software that acts in an intelligent fashion to assist a user with a learning task. In so doing it may display autonomous behaviour, react in a useful way to user input, and sometimes act on a user's behalf (for example, at a remote location). Some agents may be capable of pro-active behaviour in the attainment of an educational goal. Agents may also display human-like traits such as desires, motivations, beliefs about a world-state, and personality characteristics, which may be represented to the user in graphic, aural or text forms.
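By way of illustration, the traits summarised above can be captured in a small program skeleton. The following sketch (in Python, with invented names; it is not drawn from the ALLIANCE source) shows an agent that reacts to user events, keeps a simple representation of the user's goal, and can volunteer goal-directed help without being asked.

# Minimal sketch of the agent traits summarised above; class and method
# names are illustrative only.
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class LearnerGoal:
    task: str                                # the learning task being supported
    completed_steps: Set[str] = field(default_factory=set)

class LearningAgent:
    """Reacts to user events, tracks a goal, and may act on the user's behalf."""

    def __init__(self, goal: LearnerGoal):
        self.goal = goal

    def observe(self, event: str) -> Optional[str]:
        """Reactivity: behaviour changes as circumstances change."""
        if event.startswith("error:"):
            return self.assist(event)        # pro-active, goal-directed help
        self.goal.completed_steps.add(event)
        return None

    def assist(self, event: str) -> str:
        """Frame help in terms of the user's goal rather than the raw error."""
        return f"To progress with '{self.goal.task}', review the step behind {event!r}."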
Research Overview
One of the basic questions this research is attempting to answer is: can software programs such as learning agents and their variants be created that assist learning in a problem-based task? The same question on an analytic level might be: what would such software have to do to accomplish such a task? Suggestions include:
Figure 1. A possible scheme of a software learning agent in AI in Education - providing a link between learners, resources, other forms of assistance and a learning task.
The scope of some of these tutoring strategies is obviously far too broad to implement in a single software program, but they have all served to inform the design process for learning agent prototypes, allowing us to keep our pedagogical objectives in reasonable perspective.
Current software research at NeTTL has focussed on the development and application of the ALLIANCE program to augment a software program being developed for the effective learning of the Australian Soil Classification (ASC) system (Isbell, 1996). The ASC software project (working title: Australian Soil Classification On-line Tutor - ASCOT) was undertaken by the University of Sydney's Department of Agricultural Chemistry and Soil Science, together with NeTTL, as a response to the publication of a new hierarchical soil classification system for Australian conditions, which has since been adopted as the standard classification system for Australian soils. The program is problem-based, and provides undergraduate students with the resources and training necessary to allocate a soil profile to a particular soil class. Students interrogate a database of soil profiles and relevant soil horizon data, then allocate each profile to an appropriate class by reference to the index of classification criteria given by Isbell (1996). This process involves reviewing both graphic and text resources. Users of the software have the option of classifying the given soil profiles to anywhere between the second and fifth hierarchical levels of the classification. ASCOT has been designed to encourage collaborative learning, with undergraduate students using the program in small groups during class time. Individuals accessing the database can take advantage of the ALLIANCE self-assessment routines to monitor their own progress at any time.
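The core check the software must perform can be sketched as follows (the profile data and class names are invented for this example and are not taken from the ASCOT database; the hierarchy is assumed to run order, suborder, great group, subgroup, family): a student's allocation is compared level by level against a stored reference classification, stopping at whichever level the student chose to classify to.

# Sketch only: invented reference data, not the ASCOT schema.
ASC_LEVELS = ["order", "suborder", "great group", "subgroup", "family"]

# Hypothetical reference classification for one soil profile.
REFERENCE = {
    "profile_17": ["Vertosol", "Grey", "Epipedal", "Self-mulching", "Fine"],
}

def check_allocation(profile_id, allocation):
    """Compare a student's allocation (two to five levels) with the reference,
    returning (level, answer, correct?) for each level attempted."""
    reference = REFERENCE[profile_id]
    return [
        (level, given, given == expected)
        for level, given, expected in zip(ASC_LEVELS, allocation, reference)
    ]

print(check_allocation("profile_17", ["Vertosol", "Brown"]))
# [('order', 'Vertosol', True), ('suborder', 'Brown', False)]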
Design Issues
Workshops run during the design phase of the ASC project identified the need for the incorporation of an intelligent help system. Given the nature of the learning task, and the program's use of rather complex and arcane notation, a help system that could identify erroneous or irrelevant responses and offer contextual help was seen as essential to the project's success as an effective learning tool.
The workshops also highlighted a number of key questions about aspects of agent design and implementation. What scope of responses would the learning agent attempt to deal with? Would the agent be motivated to step in when an inappropriate response was detected, or would it only appear on request from a user? Having tracked the student's use of the software and their input, would it offer a review of progress to date? What levels of hinting would be appropriate?
Another design issue centred on look-and-feel aspects. That is, how should an agent manifest itself (and need this be a fixed mode)? Are there advantages in using an animated graphic representation with all the inherent design overhead, or would a simple text box suffice? Should the agent have a voice?
In order to resolve these queries we went back to the learning objectives, which were fairly straightforward:
A substantial literature review showed us that other researchers had grappled with many of these design issues and come up with apparently workable solutions. Murphy and McTear (1997) use a Learner Model, which stores learner characteristics that are relevant to the tutoring strategies employed by their computer aided language learning (CALL) system called CASTLE. The system's Tutoring Module contains a series of adaptive templates, allowing it to tailor representations of grammatical and linguistic concepts for each learner. It also has an inference engine, which provides the system with a current assessment of the user's proficiency so that the system knows when to intervene and offer remedial help, and what level of help to offer. Three hinting levels are available on demand from a user, each of which affects the user's subsequent proficiency level. Level 3 help, for example, supplies the correct answer, and lowers the proficiency level for that topic the most. Input errors are recorded as a user progresses through an exercise. Exceeding an error threshold (e.g. making three errors of the same category) causes the Tutoring Module to recommend a remedial exercise, based on the current proficiency level and the system's understanding of the required prerequisite knowledge. In addition, a combination of proficiency scores gives learners a measure of overall proficiency in the subject. This dynamic approach to measurement of a user's proficiency is employed by several intelligent tutor programs, enabling them to track progress and provide highly contextual help at the appropriate time.
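The bookkeeping behind such a scheme is straightforward, as the following sketch suggests (the penalty values and threshold are illustrative assumptions, not those used by CASTLE): hint requests lower a per-topic proficiency score according to the level taken, input errors are counted by category, and exceeding the threshold triggers a remedial recommendation.

# Illustrative values only; not the figures used by CASTLE (Murphy and McTear, 1997).
from collections import defaultdict

HINT_PENALTY = {1: 0.05, 2: 0.10, 3: 0.20}    # level 3 reveals the answer, so it costs most
ERROR_THRESHOLD = 3                            # e.g. three errors of the same category

class LearnerModel:
    def __init__(self):
        self.proficiency = defaultdict(lambda: 1.0)   # per-topic score in [0, 1]
        self.errors = defaultdict(int)                # error counts per category

    def request_hint(self, topic, level):
        """Taking a hint lowers the proficiency score for that topic."""
        self.proficiency[topic] = max(0.0, self.proficiency[topic] - HINT_PENALTY[level])

    def record_error(self, topic, category):
        """Return a remedial recommendation once the error threshold is reached."""
        self.errors[category] += 1
        self.proficiency[topic] = max(0.0, self.proficiency[topic] - 0.05)
        if self.errors[category] >= ERROR_THRESHOLD:
            self.errors[category] = 0
            return f"Recommend a remedial exercise on '{category}'"
        return None

    def overall_proficiency(self):
        """Combine per-topic scores into a single overall measure."""
        scores = list(self.proficiency.values())
        return sum(scores) / len(scores) if scores else 1.0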
The current prototype implementation of ALLIANCE adopts a rather simplified version of the coached problem solving approach used by the ANDES program to teach physics (Conati et al., 1997). As the knowledge domain is highly specialised, it was decided that the predictive analysis techniques (based on Bayesian networks) used by ANDES were not necessary to provide motivation for the agent software. The organised nature of the knowledge map and its logical sequencing of classification steps mean that the software can make use of basic heuristics and logging techniques to determine simple facts about a user's proficiency and current level of completion. Users currently have total control over the agent software, which can be disabled at any time.
ALLIANCE is motivated to intervene when a user has made an incorrect attempt at classification. Given this input, together with its knowledge of what resource sites the user has already visited, and its own understanding of the prerequisite knowledge required for the current topic level, it will suggest a 'generic' site where the missing knowledge can be found and reviewed. Where possible, the agent avoids directing or moving the user to the specific site, usually making suggestions that use a general classification term, such as 'I think you should review the Physical Properties for this soil'. This allows students to discover fact-holding sites for themselves, and gives them practice in navigating the various resource categories.
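This intervention rule can be sketched as follows (the resource categories and prerequisite map are invented examples rather than ASCOT's actual knowledge map): after an incorrect attempt, the agent compares the prerequisite categories for the current level with those the log shows the user has already visited, and phrases its suggestion in terms of a category rather than a specific page.

# Sketch of the intervention rule described above; prerequisite data invented.
PREREQUISITES = {
    "suborder": ["Physical Properties", "Colour Charts"],
    "great group": ["Physical Properties", "Chemical Properties"],
}

def log_visit(visit_log, category):
    """Record that the user has opened a resource page in this category."""
    visit_log.add(category)

def suggest_after_error(level, visit_log):
    """Point to an unvisited prerequisite category in general terms, leaving
    the user to locate the specific fact-holding page for themselves."""
    missing = [c for c in PREREQUISITES.get(level, []) if c not in visit_log]
    if missing:
        return f"I think you should review the {missing[0]} for this soil."
    return "You have seen the relevant resources; try comparing them with the profile again."

visited = set()
log_visit(visited, "Colour Charts")
print(suggest_after_error("suborder", visited))
# I think you should review the Physical Properties for this soil.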
The representation of the ALLIANCE agent as a graphic entity fulfils two needs. First, it accords well with Bates' (1994) notion of agents being 'believable' and therefore accepted as suitable tutorial assistants. Bates' Oz project explored the use of simple graphical agents driven by underlying programmed 'emotions'. Second, it fits within ASCOT's organisational approach of using HTML pages that combine resources such as a database of soil images with text-based courseware sections. While the ALLIANCE agent has a characteristic mode of expression, its 'emotions' are quite limited, being (presumably) inferred from its physical actions. We avoided choosing graphic forms that were too anthropomorphic, as users tend to imbue human-like agents with unrealistic abilities never intended or programmed by the system creators. The current prototype appears as an animated earthworm ('Al'), which can communicate with both speech-balloon text and text-to-speech output. As Al can be positioned dynamically anywhere on the page, gestural information (such as indicating specific soil components) can play a part in the interaction, enhancing or replacing text responses.
Fig. 2. Screen-shot of prototype ALLIANCE agent invoked from the ASCOT Introduction Page
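One simple way to represent a single interaction turn of this kind (a sketch only; the field names are assumptions rather than the prototype's internal structures) is a small record combining the text to display or speak, the agent's position on the page, and an optional gesture target such as a soil component being pointed at.

# Illustrative record for one agent turn; field names are assumptions,
# not taken from the ALLIANCE prototype.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class AgentTurn:
    text: str                                # speech-balloon text
    speak_aloud: bool = False                # also render via text-to-speech
    position: Tuple[int, int] = (0, 0)       # where 'Al' appears on the page
    gesture_target: Optional[str] = None     # e.g. a soil horizon being indicated

turn = AgentTurn(
    text="Notice the mottling in this horizon.",
    speak_aloud=True,
    position=(420, 310),
    gesture_target="B2 horizon",
)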
In its job of augmenting the learning experience, ALLIANCE undertakes a number of tasks such as:
Conclusion
At the current level of development, ALLIANCE attempts to augment the stated learning objectives of the ASCOT tutorial program in what we believe is a realistic and realisable way. We have chosen a software architecture that can be served across any network, opening up possibilities for both distributed and collaborative learning modes. We anticipate that the experience gained in researching and implementing various ALLIANCE agent prototypes will translate to other disciplines and alternative modes of course delivery.
The effectiveness of learning agents such as ALLIANCE in offering useful (and acceptable) help to a student will be the subject of an evaluation phase of the authors' research into learning agents in general. The logging feature used by ALLIANCE offers students, educators and evaluators valuable feedback on how the tutorial program is being used, and how it might be improved.
Future developments in this area of artificial intelligence will likely concentrate on the use of different media for human-computer interaction. We suspect that, given the choice, users will prefer to have tutorial assistance delivered to them in aural form, leaving them free to concentrate on the subject matter at hand. Recent advances in continuous speech recognition may mean that a large part of the assistance/response interaction is conducted as spoken conversation. This would be particularly pertinent in language learning settings. An ideal agent tutor would operate as transparently and unobtrusively as possible, motivated by a need to assist rather than judge a student's output.
References
Bates, J. (1994) The Role of Emotion in Believable Agents, in Communications of the Association for Computing Machinery, special issue on Agents (Ed. D. Riecken)
Boden, M. A. (1994), Agents & Creativity, in Communications of the Association for Computing Machinery, special issue on Agents (Ed. D. Riecken)
Conati, C., Gertner, A., VanLehn, K. and Druzdzel, M. (1997), On-Line Student Modeling for Coached Problem Solving Using Bayesian Networks, in User Modeling: Proceedings of the Sixth International Conference, UM97, Springer, Vienna/New York
Grosof, B. et al. (1995), The Role of Intelligent Agents in the Information Infrastructure, IBM White Paper, located at: http://www.research.ibm.com/iagents/publications.html
Isbell, R. (1996), Australian Soil Classification, CSIRO Publishing, Collingwood, VIC
Murphy, M. and McTear, M. (1997), Learner Modelling for Intelligent CALL, in User Modeling: Proceedings of the Sixth International Conference, UM97, Springer, Vienna/New York
The Concise Macquarie Dictionary (1982), Doubleday Australia: p. 33
Wooldridge, M. and Jennings, N. (1995), Agent Theories, Architectures and Languages: a Survey - in Intelligent Agents, Springer-Verlag, Berlin: p. 2
Wooldridge, M. (1997), Agents as a Rorschach Test: A Response to Franklin and Graesser, in Intelligent Agents III, Wooldridge, M., Jennings, N. and Müller, J. (Eds.), Springer-Verlag, Berlin: p. 48
(c) Tom Fenton-Kerr, Tony Koppi, Marcel Chaloupka, Stephen Cattle, Alison Anderson
The author(s) assign to ASCILITE and educational and non-profit institutions a non-exclusive licence to use this document for personal use and in courses of instruction provided that the article is used in full and this copyright statement is reproduced. The author(s) also grant a non-exclusive licence to ASCILITE to publish this document in full on the World Wide Web and on CD-ROM and in printed form with the ASCILITE 97 conference papers, and for the documents to be published on mirrors on the World Wide Web. Any other usage is prohibited without the express permission of the authors.