
Software agents and the 'human factor' in the electronic classroom

Carolyn Dowling
Australian Catholic University
One of the fastest growing applications of AI research is the implementation of 'agent' software within a range of contexts, including education. While some of these are designed to carry out the many background tasks that support teaching and learning, others are specifically intended to substitute for the interactions between teacher and student, even between student and student, that provide the critical social dimension of the face to face classroom. Through reference both to theoretical issues and to practical examples, this paper explores the degree to which socially interactive roles within electronic learning environments can be effectively enacted by software agents, with a particular emphasis on some of the implications for interface design.


The implementation of 'agent' software is one of the fastest growing areas of AI research. Features distinguishing this type of program from others include a high degree of autonomy in decision making and action, the ability to 'learn' from experience and adapt its behaviour accordingly, and often a highly personified interface.

The appeal of agent software is multi-faceted, and varies considerably according to the context of use. In educational settings a clear distinction can be made between those applications intended to contribute invisibly and anonymously to the many background tasks that support the teaching/learning process, and those involving direct interactions with students. The latter in a very real sense become participants in the 'social' dimension of the classroom that is of such importance in contemporary pedagogical theory and practice.

The substitution of computer programs possessed of varying degrees of artificial intelligence and 'personality' for the 'social' presence of a human teacher, or indeed of a fellow student, in the computer based classroom, raises a number of questions related to the processes through which knowledge is socially constructed, and to the qualities which are necessary to ensure successful participation in those processes. To what extent can such roles be effectively and appropriately fulfilled by a computer program?

The socially interactive pedagogical agent

A useful overview of the potential functions of a socially interactive agent in the classroom is given by Johnson:
"Pedagogical agents are autonomous agents that support human learning, by interacting with students in the context of interactive learning environments. They extend and improve upon previous work on intelligent tutoring systems in a number of ways. They adapt their behaviour to the dynamic state of the learning environment, taking advantage of learning opportunities as they arise. They can support collaborative learning as well as individualised learning, because multiple students and agents can interact in a shared environment. Given a suitably rich user interface, pedagogical agents are capable of a wide spectrum of instructionally effective interactions with students, including multimodal dialog. Animated pedagogical agents can promote student motivation and engagement, and engender affective as well as cognitive responses." (Johnson 1998, p. 13.)
While the temptation to revisit older styles of instructional software in the light of new capabilities must certainly exist, it is gratifying to note in the literature that developers are attempting to base their work on aspects of pedagogical theory more acceptable to current thinking.

An interesting and relevant perspective on the participation of agents in the social aspects of knowledge construction is expressed in the work of Sheremetov and Nunez (1999), deriving from the theoretical frameworks of Piaget and Vygotsky. They write:

"The design of learning environments, virtual or not, aims to promote productive interactions. In this type of learning a student changes from being a passive information receiver to an active collaborator, interacting with the tutors and colleagues in the learning process. Learning does not only result from acquiring knowledge, solving problems or using tools, but also from interacting about these on-going activities with persons and agents." (Sheremetov & Nunez 1999, p.306)
Further, as Solomos and Avouris (1999) suggest:
"The user mental model of the system should be based on the metaphor of the 'invited professor' rather than the 'knowing everything own tutor'. ... Our first findings confirm the observation that today's users, accustomed to hypertext-like interaction, are more likely to accept this collaborative teaching metaphor, according to which their tutoring system is viewed as an intelligent hypertext browser, offering links to other tutoring systems with the right content and at the right time" (Solomos & Avouris 1999, p. 259).
The image of the teacher as a facilitator of learning rather than as the 'sage on the stage' is also reflected in such statements as:
"Each student working on the project will have an agent, operating in the background, watching progress, measuring it against the plan, and taking remedial action when necessary" (Whatley et al 1999, p. 362).
The substitution of agents for one or more of a student's classmates or fellow learners is an interesting concept being explored by a number of developers of electronic learning environments. A well known example is the work of Chan (1996, 1998) in relation to the 'learning companion' - a software entity having limited knowledge of the domain in question, conceptualised as a fellow learner with whom the student may collaborate and even disagree. As in real life, some of these learning companions may be better informed than the student in the relevant domain of knowledge, while others may know less.

The concept of the computer as 'tutee', familiar to Logo users in the eighties when 'teaching the turtle' was a popular metaphor for the activity of programming, is echoed in implementations of agents designed to be 'taught' by the student. An example is described by Ju (1998) who writes of a computer based peer tutoring system employing two categories of agent - an 'expert', and a 'learner':

"... students become active learners who are guided to learn by teaching a computer. After the students watch how the computer expert solves a set of linear equations [the program] helps the human student act as a teacher in order to learn more about the subject matter. At this time, the computer plays the role of a student ..." (Ju 1998, p. 559)
An interesting variation on this theme is presented by Sheremetov and Nunez (1999, p. 310), who describe the role of a 'monitor agent' as being to modify the role, behaviour or expertise of learning companions from that of strong group leader to a weaker companion or even a passive observer, depending on its interpretation of whether the learner requires more or less guidance.
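The adjustment Sheremetov and Nunez describe can be thought of as stepping a companion along a spectrum of roles. The following is a minimal sketch of that idea only; the function and role names are hypothetical illustrations, not the authors' implementation:

```python
# Hypothetical sketch of a 'monitor agent' adjusting a learning
# companion's role along a spectrum, from strong group leader through
# weaker companion to passive observer, according to whether the
# learner appears to need more or less guidance.

ROLES = ["group_leader", "weak_companion", "passive_observer"]

def adjust_companion(current_role: str, learner_needs_guidance: bool) -> str:
    """Step the companion toward leadership when the learner needs
    guidance, and toward passive observation when they do not."""
    i = ROLES.index(current_role)
    if learner_needs_guidance:
        return ROLES[max(i - 1, 0)]          # move toward group_leader
    return ROLES[min(i + 1, len(ROLES) - 1)]  # move toward passive_observer
```

In practice the monitor's judgement of whether guidance is needed would itself come from its interpretation of the learner's on-going activity; here it is reduced to a single boolean for clarity.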

While the examples cited above might suggest that each agent is designed to function independently, this is far from being the case. An interesting aspect of most agent based educational systems is their use of a multiplicity of agents, many of them capable of a complex range of interactions with the student, with one another, and increasingly with agents associated with other programs. These interactions range in nature from collaboration to competition, and their purposes are derived from theoretical analyses of the various component tasks and activities that make up the human activity of 'teaching'. An example is the Multiple Agent Tutoring System (MATS) described by Solomos and Avouris:

"MATS is a prototype that models a 'one student-many teachers' learning situation. Each MATS agent represents a tutor, capable of teaching a distinct subject. All MATS tutors are also capable of collaborating with each other for solving learning difficulties that their students may have" (Solomos & Avouris 1999, p. 243).
The coordination of the activities of multiple agents within the same environment occupies a considerable portion of research and development effort. Interestingly, much of the literature in this area further reflects the social dimensions of agent software by employing the metaphor of a 'society' to refer to aggregations of agents.
"The society [of artificial tutoring agents] is an open multi-agent system made up of a collection of tutoring agents that co-operate among themselves to promote the learning of a certain human learner" (Costa & Perkusich 1997, pp. 197-198).

The role of the interface

Where agents are intended to interact to a significant extent with human beings on a 'social' basis, the nature of the interface is obviously of critical importance. This is particularly the case in education, where the 'interface' between a human teacher and a student is itself so fraught with complexity and subtlety. A key element in the implementation of socially interactive agents is a high degree of personification or 'character'. It is well accepted that an element of personification of program interfaces is inevitable. As Shirk puts it:
"Although there is some dispute among software critics concerning the advisability of having 'personalities' in computer programs, their presence seems unavoidable. Any time there is communication between a computer and a human, the information presented by the computer has a certain style, diction, and tone of voice which impact upon the human's attitude and response toward the software" (Shirk 1988, p. 320).
Deliberate personification, however, is an area of great complexity. For instance, while a significant aspect of the representation of 'character' or personality is visual appearance, both research and experience suggest that a mismatch between realism in appearance and the apparent knowledge level of the agent can have a deleterious effect on credibility. In other words, the more visually realistic the representation, the higher the expectations of the user in relation to the appropriateness and 'intelligence' of utterances and actions. Agents that 'look' smart and 'act' or 'talk' dumb are poorly received by many users, who express a higher tolerance for the limitations of a 'character' more sketchily represented, for instance through cartoon-like graphics. As Masterton writes:
"A common problem with AI programs that interact with humans is that they must present themselves in a way that reflects their ability. Where there is a conflict between the ability of the system and the users' perception of that ability a breakdown occurs and users may either fail to exploit its full potential or become frustrated with its shortcomings" (Masterton 1998, p. 215).
In the same paper he describes the use of a degree of anthropomorphism intended to convey qualities such as friendliness and usefulness, without the implication of possession of full human capabilities (Masterton 1998, p. 211).

Chan, whose work was referred to earlier, acknowledges this aspect of interface animation in his design of pedagogical agents developed to interact with young children. Perhaps not surprisingly, animals are a popular choice of persona for such agents, as in this example from a networked learning environment for Taiwanese high school students:

"The Dalmation is having the same performance as the student. A virtual clone may do things you don't know and you cannot control. Another animal companion is Dragon, like one of those animal companions in Mulan, a Disney cartoon of this summer" (Chan 1998).
In addition to the innate attraction such characters would hold for most young students, our expectations in regard to the cognitive skills of animals may well be more appropriate to the capabilities of software agents than are our experiences of human-to-human interactions.

We need to bear in mind, however, that the role of these particular agents in the educational setting is more that of 'fellow learner' than 'instructor'. Different issues apply when considering the appropriate personification of a more authoritative figure within the learning environment. Users of Microsoft Office products are all too familiar with the indignity of being subjected to the whims of an animated paper clip. While it is clearly important that socially interactive agents are animated or personified in a way that is attractive to the student (who might otherwise feel inclined to 'turn them off' - an option not available in the face-to-face classroom!), it is also important that the interface bears some relationship to the style with which a human being would enact the role in question. Here cultural differences have the potential to complicate a scenario in which the electronic learning environment is intended for use beyond the local context. The expected roles and behaviours of both teachers and students can vary widely between cultures.

A recent development in relation to the style of interface appropriate to an interactive pedagogical agent concerns attempts to replicate emotional behaviour. An agent's capacity to demonstrate some equivalent to the emotional responses of a human being, and to appropriately recognise and respond to the emotions of users, is becoming recognised as an important element both in personification generally and in educational contexts in particular. As Frasson writes, "Emotions play an important role in the learning process and new strategies have to take into account this human factor for improving knowledge acquisition. Intelligent agents can help in this process, adding emotional behaviour to believability of their actions" (Frasson 2000, p. 60).

Arguments in favour of a less fully animated or personified agent interface within educational contexts also relate to the issue of student control over learning. Some theorists feel that current trends in educational practice, which favour giving more control and autonomy to the learner, are more in line with the thinking of researchers such as Shneiderman (1983), who favours 'direct manipulation' over the development of interactive agents with a significant degree of personification and independence of action. There is certainly a case for suggesting that pedagogical agents be configured so as to be more responsive to instruction from the user/student, and that their characteristics and capabilities should be more transparently presented. A high degree of transparency in relation to the functioning, indeed to the existence, of agents within the electronic classroom may also be important in maintaining the level of trust which many educators believe to be an essential component of any learning environment, whether computer based or face to face. There are clear ethical issues attached to presenting these programs in a form which students are unable to distinguish from that of a human participant in the learning experience.


But however personified the software agent, can it really be said to participate fully in the social construction of knowledge? It has been argued quite extensively that even the most heavily personified of computer programs suffer from an intrinsic lack of ability to participate in the metacognitive aspects of learning. Pufall (1988), for instance, expresses a strong belief that a computer program is unable, at any level commensurate with human capacities, to modify its own knowledge structures or cognitive processes, and so cannot be regarded as a co-constructor of knowledge in a meaningful sense. While this might well have been the case in relation to earlier computer based learning environments, can we continue to make the same claims with confidence today or in the future? The capacity of software to 'learn' and adapt to experience, through the incorporation of new information and the appropriate modification of its representation of the context in which it functions and of its inference mechanisms, is undoubtedly increasing. If our test of full participation depends on an understanding that the agent has 'learnt' in precisely the same way that the human has learnt, then we will have difficulty accepting the electronic entity as a genuine co-constructor of knowledge. If, however, we make our claim on the grounds that it appears to the human learner that the agent has participated in the learning that has taken place, then perhaps we can at least tentatively admit such a piece of software to membership of the social milieu that has mediated the educational experience.


References

Chan, T.-W. (1996). Learning companion systems, social learning systems, and the global social learning club. Journal of Artificial Intelligence in Education, 7(2).

Chan, T.-W. (1998). The past, present, and future of educational agents. [viewed 3 Feb 2000, not found 13 Aug 2002]

Costa, E.deB. & Perkusich, A. (1997). Designing a multi-agent interactive learning environment. In Proceedings of International Conference on Computers in Education (ICCE '97), Kuching, Sarawak, Malaysia, December 2-6, pp. 196-203.

Frasson, C. (2000). The role of emotional agents in intelligent tutoring systems. In Proceedings of the Eighth International Conference on Computers in Education (ICCE 2000), Taipei, Taiwan, 21-24 November, p. 60.

Johnson, W.L. (1998). Pedagogical agents. Proceedings of the Sixth International Conference on Computers in Education (ICCE '98), Beijing, China, October 14-17, pp.13-22.

Ju, Y. (1998). Development and formative evaluation of a computer-based peer tutoring system. Proceedings of the Sixth International Conference on Computers in Education (ICCE'98), Beijing, China, 14-17 October, pp. 559-566.

Masterton, S. (1998). Computer support for learners using intelligent educational agents: The way forward. Proceedings of the Sixth International Conference on Computers in Education (ICCE'98), Beijing, China, 14-17 October, pp. 211-219.

Pufall, P. (1988). Function in Piaget's system: Some notes for constructors of microworlds. In Forman, G. & Pufall, P. (Eds), Constructivism in the Computer Age. Hillsdale, New Jersey: Lawrence Erlbaum Associates.

Shneiderman, B. (1983). Direct manipulation: A step beyond programming languages. IEEE Computer, 16(8).

Sheremetov, L. & Nunez, G. (1999). Multi-agent framework for virtual learning spaces. Journal of Interactive Learning Research, 10(3/4), 301-321.

Shirk, H.N. (1988). Technical writers as computer scientists: The challenges of online documentation. In Barrett, E. (Ed), Text, Context and Hypertext: Writing with and for the computer. Cambridge, Massachusetts: MIT Press.

Solomos, K. & Avouris, N. (1999). Learning from multiple collaborating intelligent tutors: An agent-based approach. Journal of Interactive Learning Research, 10(3/4), 243-263.

Whatley, J., Staniford, G., Beer, M. & Scown, P. (1999). Intelligent agents to support students working in groups online. Journal of Interactive Learning Research, 10(3/4), 235-243.

Author: Carolyn Dowling is an Associate Professor and Head of the School of Business and Informatics (Victoria) at Australian Catholic University. Her teaching and research interests have focussed at different times on a range of HCI issues, social and ethical aspects of computing, Logo, virtual reality in education and training, software agents, computer mediated writing and aspects of Internet use including computer mediated communication and learning. Email:

Please cite as: Dowling, C. (2002). Software agents and the 'human factor' in the electronic classroom. In S. McNamara and E. Stacey (Eds), Untangling the Web: Establishing Learning Links. Proceedings ASET Conference 2002. Melbourne, 7-10 July.

Created 10 Aug 2002. Last revision: 10 Aug 2002.
© Australian Society for Educational Technology