[ EdTech'88 Contents ] [ EdTech Confs ]

Plenty of chaff - some wheat: Evaluating CBT authoring packages for industry and education

Peter Hosie
Training Program Designer
State Energy Commission of WA (SECWA)

Duncan Jamieson
Human Resource Development Consultant
Department of Computing and Information Technology
Government of Western Australia

Choosing a Computer Based Training (CBT) authoring language is a daunting task. With in excess of one hundred major packages on offer for microcomputers, errors are costly in terms of scarce capital, opportunities foregone, and human resources wasted. Selection criteria must be matched with the learner needs and training requirements of an organisation and not be technology driven. This paper will trace the process involved in selecting a package for a large government statutory authority. The selection process documented will assist other large government and private organisations to rapidly select a suitable CBT authoring system.

Perhaps the most logical starting point for this paper is to clarify what will not be included. Conference papers about CBT tend to routinely consider "central issues" of concern. While there is no denying the importance of such concerns there is a case for not waiting for these issues to be satisfactorily resolved before attempting to select a Computer Based Training (CBT) authoring system.

For the purpose of this paper it is taken as established that authoring languages for CBT represent, at present, the only realistic way of productively developing the majority of CBT applications. Some will quibble with this assertion but this debate can be taken up in another forum. If you're not quite sure what a CBT authoring system or language is then Appendix 1 should offer some joy. No attempt will be made to argue the pros and cons of using source code as opposed to authoring languages for developing CBT. A summary of these issues is provided in Appendix 2 for those unfamiliar with this debate. For the novice it is worth mentioning that we are hunting for a complete authoring system not simply a computer language.

At a more fundamental level there will be no attempt to justify the use of CBT in comparison to traditional delivery methods. However, there are sufficient instructional, economic, technical and organisational indicators to suggest a reasonable success rate for CBT. Suffice to say that CBT works when it is well designed and used in the appropriate context. Meta-evaluations of recent and established research roundly support this assertion (Wilson, 1987).

Important learning factors such as interactivity, colour, motivation, remediation, text and graphics, course completion rates, and feedback will be discussed when they directly impinge on the software or hardware being considered. No attempt has been made to present a thesis apart from restating what should be apparent - learner requirements should be of primary concern when selecting a CBT authoring system.

So what gets the nod?

In essence this is a documentation of a "real life" evaluation to select a CBT authoring system. Rather than being sponsored or conducted in an academic environment, this evaluation documents the actual process of selecting a CBT system. As such the considerations mentioned here are a product of "real world" joys and constraints.

Thus all the pressures of selecting a system with inadequate time, human resources, equivocal support for implementing CBT by management and the need to incorporate prior technology commitments in a large (5000+ employees) government statutory authority will be evident. Hopefully the results will be of value, although probably not directly transferable, to other medium to large government organisations as well as private concerns.

Where to start?

There are two basic ways of selecting a CBT authoring system. Perhaps the simplest way to approach the task is to evaluate the available systems, decide what is affordable, and go ahead and purchase the goods. The other method is to decide what your organisation's training requirements are and then begin the search for a system.

Wanted: One all purpose CBT authoring system

An impossible request but we made it just the same.

We are aware that no CBT authoring system could be all things to all people. There was no task compelling enough to warrant the selection of a dedicated system with specialist capabilities.

Future systems

Undoubtedly tomorrow's authoring systems offer exciting possibilities. As enchanting as so called artificial intelligence languages and such are, or will be, this evaluation is only concerned with what can be purchased and implemented now, not in a decade as detailed by Kozma (1986).

Feasibility study, needs assessment and all that stuff

Rarely does a textbook about educational technology get into print without blithely extolling the virtues of conducting a training needs assessment before proposing a solution. Of course this should also be a basic tenet to abide by when selecting a CBT authoring system - specifically define the needs, then select the system of your dreams. But what happens when CBT is already being successfully developed and meeting well established needs, as in this case? Clearly a full blown feasibility study, beginning with base line information, was inappropriate.

Actually the initial rationale for selecting a new CBT system was primarily to upgrade the existing CBT machine and software, which were acquired five years ago, before the current generation of microcomputer based CBT authoring systems was widely available.

In the wash-up we settled for a re-examination of the initial reasons for introducing CBT.

Some original and continuing reasons for using CBT

  1. There continues to be a shortage of highly skilled engineering and technical staff to conduct courses for "users" of the technology. In this context we are referring mainly to training for non technical users, as opposed to purely technical training. Moreover, these staff are practitioners (designers), not instructors. Hence engineering and technical skills are a scarce commodity which cannot be spared for user training, although such staff are available to act as consultants for CBT training initiatives.

  2. Most training was specific to the organisation's requirements and has large audiences. The training was of a technical user orientation; therefore it needed to be consistent, replicable, readily updateable and (hopefully!) motivational. Some of the training was to be in areas critical to the organisation's operations, therefore competency level training is involved.

  3. Employees requiring training were geographically dispersed and had a wide range of educational backgrounds and abilities. Travel costs in a state two and a half million square kilometres in size quickly kill many worthwhile traditional training initiatives before they become reality. As many staff were involved in critical operations, interruptions to their work schedules were not only disruptive but potentially disastrous. In short, taking the training to the client and making it available when it suited them to have it was clearly the way to go.

  4. The only sure way of obtaining realistic, high resolution images on a television screen in 1983, when the first CBT system was considered, was to use video. Therefore, a dedicated interactive videotape system was selected as an alternative to a CBT system capable of controlling video. This has proven to be extremely well regarded by clients. Video also allows access to the full range of human audio and visual senses, which are still not readily available to CBT. Any replacement CBT system must offer support for videotape or disk control.

  5. Information technology plays an important part in the operation of the organisation - in administrative areas as well as technical, engineering and operations, with over 200 microcomputers in use. The problem facing the authors was how to capitalise on such a ready acceptance of technology without compromising instructional quality.

  6. A great deal of the training will be delivered system wide. Therefore the ability to operate on readily available microcomputers, which many employees already know how to use, is appealing and also substantially increases the potential delivery hardware available.

    Of more consequence, however, are the pragmatics of how this aids a system wide adoption strategy. MS DOS is the most widely used microcomputer standard and there is a considerable amount of software available off the shelf.

  7. Productivity is being taken seriously by government organisations, if corporate plans can be believed. If CBT can achieve training in 30 per cent less time (Kearsley, 1983) then the wages cost savings are theoretically huge for a large organisation.

    Conducting training for new employees continues to lead to poor utilisation of human resources and could be more appropriately delivered using CBT thus freeing training staff to develop specialised courses not suitable for delivery using CBT. In effect a "teacher substitute" was required for certain well defined tasks.

This evaluation could be criticised for not attempting to canvass other sections of the organisation to ascertain their CBT requirements. In reality this organisation is so large and diverse that gaining consensus would have added significantly to the time and cost of the selection process. Moreover, one large section of the organisation had already made a long term commitment to developing a CBT system that was unsuitable for our purposes. Clearly there was nothing to gain from attempting to convince them to adopt another system which may not be an improvement on the one they are already getting good service from.

Like many other government departments this organisation is in the advanced stages of being functionally re-organised. In a climate of uncertainty it is unrealistic to expect people to be enamoured of joint ventures when the long term pay offs are uncertain. Regardless of these socio-political impediments, the diverse nature of the work undertaken militated against selecting a CBT authoring system that would satisfy everyone's requirements. As Gery (1987) observes, it is not unusual for large organisations to select a number of radically different (and incompatible) CBT systems to suit the various types of training being undertaken.

Learner requirements

If organisational training needs are the primary selection criterion for a CBT authoring system then learner requirements must be pre-eminent. Every effort needs to be made to translate these requirements into courseware design. To do this, student preferences must first be established; Wilson (1987) reported on these preferences in a recent review of CAI research. For a CBT authoring system to assist designers to produce courseware in accord with learner preferences it must have certain technological capabilities. Quality courseware is most effectively developed by providing courseware designers with the best tools available (Freedman, 1984).

As Kearsley (1983, p.14) observes, computers have certain advantages in instructional settings, such as permitting "students to learn at their own pace, individual learning styles are considered, resulting in increased student satisfaction. Most importantly, there is more control over learning materials and learning process".

The key: Interactivity

"Interactivity is CBT's raison d'etre" (Gery 1987, p.42). Got it in one.

Bork (1984) agrees that computers are going to be an important factor in all human learning because they make learning truly interactive for large numbers of learners on a cost efficient basis. Bork (1984, p.3) also observes that "... Evidence suggests (Hartley, 1981) that student control over learner strategy is the most efficient approach to CAL design. Encouraging individual routes through information will assist students to become more actively involved in the learning process". Learners "should be given as much control as possible over programs, or at least opportunities for regaining control at some stage of the instructional sequence" (Hosie, 1987, p.7).

Few educationists should need to be reminded that learning is not a passive process. Understanding and knowledge involve active processing rather than passive reception.

A consistent claim made in favour of using IV is that "it changes the student from passive observer to active participant" (Anandam and Kelly, 1981, p.3). The ineffectiveness of modern media as a learning device (in comparison with the written text) is due to the lack of opportunity for interrogation, and allied to this observation is the criticism that the learner loses control over the pace of instruction (Clark, 1984).

Magnitude of the task

Since 1982, Data Training has been evaluating CBT authoring systems and courseware vendors. Beginning with 12 systems in 1982, the list grew to 68 in 1986 and ballooned to 93 in 1987. Similarly, the nine courseware vendors surveyed in 1982 reached 114 in 1986 and 150 in 1987. A further 12 packages known about or mentioned in other publications were added to the list, bringing the total to 105 systems. The offerings continue to expand, which is either paradise or a prospective client's nightmare.

Figure 1: Growth in CBT systems and courseware vendors

According to Data Training (p.5) "It isn't just the number of systems that keeps growing either, it's their capabilities... the variety of features and options is fast outstripping our ability to report on them succinctly". If the experts can't keep pace what hope is there for neophytes to the game?

A publication like Data Training's "A Resource Guide to Computer Aided Instruction" is of inestimable value for initially sorting which system(s) to choose. But such a publication does, by its own admission, have shortcomings. There is no attempt to rate the quality of the features mentioned, only their presence or absence - essentially a binary analysis. Despite this limitation such information saves considerable brain and legwork - permitting unsuitable products to be eliminated in the initial screening.

The process

Figure 2: Steps in the selection of a CBT system

Stage 1: Initial screening

Clearly the task of evaluating the large number of authoring packages in detail was impossible. Given the large number of possibilities it quickly became clear that an initial "paper sifting" was necessary.

A method of screening for "hardware compatibility" and "software readiness" was employed (Penter, 1983). Packages that failed to meet the essential criteria described below were given no further consideration. No attempt was made to rate these functions, only to establish whether the system had them or not.


  1. Must have a full screen editor, or ability to use an external full screen editor.
  2. Must support true/false, multiple choice and matching response answer analysis.
  3. Must be capable of supporting author determined branching.
  4. Must be capable of importing and manipulating external text and graphics files.
  5. Must interface with videodisc or videotape.
  6. Must operate in an MS DOS environment.
  7. Must support medium resolution colour graphics (CGA) as a minimum.
  8. Must support tracking of student progress.
  9. Must have Australian based technical and instructional support.
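The nine criteria above amount to a simple pass/fail filter: a package survives the initial screening only if every essential feature is present. As a hypothetical sketch (the package names and feature data below are invented, and the feature keys merely paraphrase the nine criteria):

```python
# Hypothetical sketch of the Stage 1 pass/fail screening.
# The feature keys paraphrase the nine essential criteria;
# the package data is invented for illustration.

ESSENTIAL = [
    "full_screen_editor", "answer_analysis", "author_branching",
    "imports_text_graphics", "video_interface", "ms_dos",
    "cga_graphics", "student_tracking", "australian_support",
]

packages = {
    "Package A": {f: True for f in ESSENTIAL},
    # Identical to Package A except it fails the MS DOS criterion:
    "Package B": {**{f: True for f in ESSENTIAL}, "ms_dos": False},
}

def passes_screening(features):
    """A package survives only if every essential feature is present."""
    return all(features.get(f, False) for f in ESSENTIAL)

shortlist = [name for name, feats in packages.items() if passes_screening(feats)]
print(shortlist)  # -> ['Package A']
```

No attempt is made to rate quality at this stage - presence or absence alone decides, exactly the "binary analysis" described earlier.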

Stage 2: Instructional and technical assessment

After deciding what factors deserved inclusion, a list of required hardware and software features was composed. A conscious effort was made not to delineate between hardware and software components, in an effort to avoid the selection criteria becoming technically driven. Therefore software concerns predominated; hardware only became an issue when it affected software performance. While some features were struck from the list, other more common ones remained (and were rated as "Present but not required").

A section for a "qualitative response" was added. Naturally this could be criticised, quite reasonably, for introducing too subjective a criterion into the evaluation. This claim can be readily countered by the observation that many CBT systems are selected using purely subjective methods (Kearsley, 1983): my brother in law sells them, the vendor sponsored our association's conference, or I like the brochure. As indefensible as such methods are, the reality is that these "procedures" are adopted on occasion. Still, there is room for a frankly subjective analysis - I like using it/don't like using it. After all, an affinity with what learners find engaging should be part of the repertoire of a CBT designer. Try explaining that to your brother in law!

Before this task could be undertaken a detailed evaluation of specific criteria was necessary. Rather than simply drawing up a shopping list of requirements, an easy to use rating system was established. The opinions of colleagues using the software were sought when possible. However, because these opinions are subjective as well as second hand, judicious weighting was given to this category.

Reviews of the software concerned are listed in the bibliography. The quality and quantity of these reviews varied but on balance they remain a valuable source of information and were therefore included in the rating considerations.

Only when an opinion or review carried a strongly positive or negative evaluation was there a large variation in the score given. All packages were given a basal rating of 50% of the total (ie. if the possible score was 50 points, 25 points were automatically allocated) in an attempt to correct for rating errors.

Two five point rating scales were employed. The "desirability" scale measured the perceived importance of features to learners and the organisation, while the rating scale provided a method of evaluating the quality of particular features. The two scores were multiplied to give a raw weighted score. Hence the package with the highest score gets purchased (in theory at least).

Desirability scale:

5   =   Very important
4   =   Highly desirable
3   =   Desired
2   =   Desired but not required
1   =   Present but not required

Rating scale:

5   =   100% match required
4   =   75% match required
3   =   50% match required
2   =   25% match required
1   =   0% match required

Qualitative Responses
Subjective analysis
Colleagues' evaluations
Literature comments
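The scoring scheme just described, in which each feature's desirability weight is multiplied by its quality rating and a basal 50% of the total is allocated automatically, can be sketched as follows. This is one possible reading of the basal correction (half the maximum score granted up front, the other half earned through the weighted ratings); the feature names and numbers are invented for illustration:

```python
# Hypothetical sketch of the Stage 2 weighted scoring scheme.
# Each feature carries a desirability weight (1-5) and receives a
# quality rating (1-5); the two are multiplied to give a raw
# weighted score. Every package starts with a basal 50% of the
# maximum possible score to damp rating errors.

def score_package(ratings, weights):
    """ratings/weights: dicts mapping feature name -> 1..5."""
    max_possible = sum(5 * w for w in weights.values())
    raw = sum(ratings[f] * weights[f] for f in weights)
    basal = max_possible / 2   # 50% allocated automatically
    return basal + raw / 2     # remaining 50% earned via ratings

# Invented example: three features, weighted and rated.
weights = {"video control": 5, "CGA graphics": 4, "mouse input": 2}
ratings = {"video control": 4, "CGA graphics": 5, "mouse input": 3}
print(score_package(ratings, weights))  # -> 50.5 out of a possible 55
```

A package rated 5 on every feature would reach the full maximum, while a package rated 1 throughout would still retain the basal half.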

A sample of the assessment instrument

Each feature below was rated from 1 to 5 on both the desirability and rating scales:

  Interface with other software packages
  Editor for unresolved branches
  Ability to use an external full screen text editor
  Control video disk
  Control videotape
  Use of mouse device for input and selection
  Graphics comparable to CGA
  Graphics comparable to EGA
  Multiple fonts
  Random test generation
  Matching answer analysis
  Student or author controlled branching
  Easy back paging
  Resume facility
  Student registration
  Security facility
  Student resume
  Student/Author communication
  Student tracking

Vendor Support

  On site author training
  CBT based author training
  Australian based technical support
  Reliable client base

Stage 3: Costings

After rank ordering the potential CBT systems, it is necessary to overlay economic reality on the short list of desirable systems. There is no purpose in selecting the "best system" for instructional reasons only to find cost constraints forcing compromises - a situation many trainers will be familiar with.

The intention of the cost analysis is to determine the comparative costs of all viable solutions on a normalised basis. The analysis will identify variations in input for a common level of output. Systems not attaining this common level of output will be normalised, with the costs involved in raising the solution to the required output level included in the costing for that solution.

This analysis will take into account not only initial capital costs but recurring costs as well.

The analysis period will be five years.
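As a minimal sketch of this normalisation idea (all figures invented, and the cost categories reduced to capital, recurring, and a one-off normalisation charge), the five year comparison amounts to:

```python
# Hypothetical sketch of the Stage 3 cost comparison. All figures
# are invented; "normalisation" is the one-off cost of raising a
# less capable solution to the common output level.

YEARS = 5  # the analysis period stated in the text

candidates = {
    "System A": {"capital": 20000, "recurring": 3000, "normalisation": 0},
    "System B": {"capital": 12000, "recurring": 4000, "normalisation": 5000},
}

def five_year_cost(c):
    """Capital outlay plus five years of recurring costs plus any
    normalisation charge needed to reach the common output level."""
    return c["capital"] + YEARS * c["recurring"] + c["normalisation"]

# Rank the candidates by normalised five year cost, cheapest first.
for name, c in sorted(candidates.items(), key=lambda kv: five_year_cost(kv[1])):
    print(name, five_year_cost(c))
```

Note how the normalisation charge can reverse an apparent advantage: the system with the lower capital cost is dearer once it is brought up to the common output level, which is exactly how this stage may change the rank ordering.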

The results of this stage may cause a change in the rank ordering of the desirable systems.

Stage 4: Trials

This term is a misnomer as applied to this study. Trials in the accepted sense will not be conducted. Kearsley (1983, pp. 145-147) outlines three major activities in conducting trials for a CBT system: development of a prototype, pilot tests and field tests.

The purpose of a prototype is to determine if the design of the system is appropriate before progressing. In this case, the purpose of the prototype is to validate the system characteristics and capabilities identified earlier. Thus, a prototype represents some small portion of the system, usually selected as exemplary of the full scale effort.

A pilot test would involve trying out the system with a small group of "students". Such "students" are typically colleagues or subject matter experts who are not directly involved in the development of the system. The purpose of a pilot is to detect any major problems in the hardware, software, courseware, courseware development and project management process.

Figure 3: Trials (Kearsley, 1983)

The final stage of a trial is field testing in which the system is tried out in the actual training setting it is being designed for.

Staffing and time constraints will prevent many of these activities being undertaken. If time permits, a limited trial will be carried out, provided suitable courseware is available.

Instead, reliance will be placed on discussions with organisations that are using the systems under consideration. Hopefully these users will have gone through a more formal process of trials, as discussed by Kearsley, and the results of their experience can be applied to the organisation's needs.


What stage are we at? What have we learned?

Although this study is incomplete, there are a number of observations that are worthy of mention.

  1. Seek a high degree of user involvement.

    Each participant must feel some degree of ownership, not only of the solution, but of the problem and selection process as well. In this study, because of time constraints, training staff have acted as the users' representatives. Consequently, a significant part of the implementation will be to market the solution to the users.

  2. Seek and maintain a high level of commitment towards the project.

    Management enquiries about slipped deadlines or delays suggest that completion of the project is of some importance. Conversely, lack of concern indicates that the project lacks priority. At times it has been necessary to seek confirmation from management of their willingness to proceed. On each occasion, however, the boundaries of resistance keep being pushed back slightly. Apathy has been vanquished!

  3. Get the necessary resources to carry out the project.

    Many CBT projects have failed for lack of necessary resources: human, financial, equipment or facilities. Time has been the biggest enemy. This project has had to be accommodated within the work priorities of the organisation and the authors.

    Within the organisation there are tasks of higher priority. Accordingly, the progression of this project has been on a part time basis. Whilst it would be desirable to have staff assigned on a full time basis, organisational considerations (internal and external) have precluded it in this case. On balance however, considerable progress has been, and can be, achieved with part time participation, providing the other conditions outlined are favourable.

  4. There must be a clear acknowledgment of a training need that can be met by CBT.

    Current experience with CBT within the organisation has convinced managers that CBT is a viable delivery mechanism. There is an enthusiasm that has to be channelled in the right direction. The identification of training opportunity or problems is paramount. It is too easy to slip into technical and equipment discussions on CBT and lose sight of, or not clearly define, the training problem being addressed.

  5. There must be clearly demonstrable results.

    Based on its previous experience with CBT, the organisation believes that there will be clear cut success in terms of improved job performance, less time off the job for training, reduced training costs for trainers and students, as well as an increase in worker satisfaction with the type and timing of the training. While an extensive cost benefit study was not undertaken prior to this project being initiated, management had sufficient confidence in CBT to proceed with a new CBT system evaluation.

Appendix 1: Definition

Computer Based Training is:
An interactive learning experience between a learner and a computer in which the computer provides the majority of the stimulus, the learner must respond, and the computer analyses the response and provides feedback to the learner. (Gery, 1987 p.6).

CAI is the direct use of the computer for the facilitation and certification of learning - that is, using the computer to make learning easier and more likely to occur (facilitation), as well as using the computer to create a record proving that learning has occurred (certification). (Burke, 1982 p. 16). [for CAI read CBT - ed.]

The extensive use of the computer in the development, delivery and administration of training (Jamieson, 1985).

Appendix 2: The Great Authoring Language or Source Code Debate

There are two general approaches to generating courseware for instructional applications of a computer (Barker and Singh, 1982). The first of these involves using a conventional programming language; the second depends upon the use of a high level, applications orientated author language. Some of the advantages of using the latter approach to courseware development include:
  1. greater author productivity
  2. freedom from the complexities of conventional programming languages
  3. the availability of powerful pattern matching primitives
  4. the provision of special facilities for generating educational dialogues
  5. specially designed student monitoring facilities, and
  6. facilities to enable the facile control of ancillary (multimedia) instructional resources.
    (Barker and Wilford, 1985 p. 115)
Over the years a large number of programming languages and systems have been developed for CBT. There are three major types of programming language that can be used: general purpose languages, system specific author languages, and system independent author languages. General purpose languages such as BASIC, PASCAL or FORTRAN are designed for any type of programming application. They are widely available and very flexible, but because they are not specifically designed for instructional applications it usually takes longer to program with them than with an author language designed for instructional programming. System specific author languages have been developed for a particular CBT system (eg. Tutor for PLATO, Coursewriter for IIS, TAL for TICCIT, or CAMIL for AIS). While these author languages take full advantage of the CBT system they are part of, they will not work on any other system. On the other hand, system independent author languages such as PILOT (Starkweather 1969), PLANIT (Feingold 1968), or LOGO (Feurzeig and Lukas 1972) are available on a number of different systems. In order to be general enough to run on multiple systems, however, they often do not optimise the capabilities of any specific system.

  Type                                 Example         Advantages                                    Disadvantages
  General purpose language             BASIC           Very flexible; widely available               Takes longer to program
  System specific author language      TUTOR (PLATO)   Optimises capabilities of specific machines   Won't work on another system
  System independent author language   PILOT           Runs on many machines                         May not optimise capabilities of specific system

Figure 4: CBT Programming Languages

Author languages possess a number of features that make courseware development easier. These include capabilities for display formatting, graphics creation, test scoring, answer matching, student data collection and report generation. All of these capabilities can be achieved using a general purpose language, but in an author language they are built in. (In fact, author languages are written in general purpose languages, usually an assembly language.) (Kearsley, 1983, pp. 115-116).

Appendix 3: Authoring languages considered in the initial screening

  1. Coursemaster
  2. Philvas
  3. Microtext
  4. Trainer
  5. Scholar Teach
  6. Unison
  7. WICAT
  8. CPAL
  9. Pilot/Superpilot (Apple)
 10. Mumedala
 11. National Author Language
 12. Tel
 13. G Pilot
 14. Adapt
 15. Tencore
 16. Tutor
 17. Accord
 18. Coursewriter/IIS
 19. Obie-1-Knobe
 20. Infowindows
 21. Author
 22. PASS
 23. Super Socrates
 24. Mandarin
 25. Quest Authoring System
 26. Allen Communications
 27. PIL-U
 28. PASS
 29. Adroit
 30. Peducator
 31. Shelley (co-resident)
 32. Phoenix
 33. SAM
 34. Scholar Teach 3
 36. Educator
 37. Access
 38. AIS PC
 39. ASAP
 40. Authology
 41. The Author
 42. GIGI
 43. WISE
 44. ICOM
 45. FORGE
 46. PC-Class
 47. IPS -Instructional Programming System
 48. CAN/8
 49. The Author Plus
 50. The Author Plus Color
 51. Avid
 52. Caste
 53. CAT
 54. CAT/S
 55. CDS/Genesis
 56. CLAS
 57. Cobra
 58. Concurrent Development Series
 59. Consulting Expert System
 60. Course of Action
 61. Coursewriter
 62. CSR Trainer - 4000
 63. Electronic Publishing system
 64. Exambank
 65. Exemplar
 66. The Grand Inquisitor
 67. Images III
 68. Infowindow
 69. Infowriter
 70. The Instructor
 71. Interact
 72. Interaudio
 73. KSS: Author
 74. Meastro
 75. McGraw-Hill Interactive Authoring System
 76. Microinstructor
 77. NCR Pilot II
 78. Oasys & Pilot Plus
 79. Of Course!
 80. PC/Pilot
 81. PC Train
 82. Phoenix-Micro
 83. Plato PCD1 Advanced Tutorial Model (ATM)
 84. Plato PCD1 Certification Testing Model(CTM)
 85. Plato PCD1 Drill and Practice Model (DPM)
 86. Plato PCD1 Graphic Simulation Model (GSM)
 87. Plato PCD1 Situation Simulation Model
 88. Plato PCD1 Tutorial Lesson Model (TLM)
 89. Plato PCD3
 90. Prophit Mentor
 91. Propi
 92. PTP/1 Professional Trainer's Package
 93. Quest
 94. R2-C
 95. Scrypt
 97. Taiga
 98. tbt Author
 99. TEL-Training and Education Language
100. Tencore Language Authoring System Tencore Assistant
101. Tencore Plus
102. Trainer Turned Author
103. Trillian Concurrent Authoring System
104. Unison Author Language
105. VAX Producer
106. Velosity 3D
107. Video Nova Authoring System
108. Wise
109. ACT I (Aural Comprehension Trainer I)
110. AIS II
111. Avid
112. CASTE (Course Assembly System and Tutorial Environment)
113. Classroom ACT(Aural Comprehension Trainer)
114. Electronic Publishing System
115. MicroTICCIT
116. SAS/AF Software
117. VAX Producer
118. VS Author
119. Omnisim
120. Phoenix
121. Micro Plato Author Language
122. SAS/AF Software
123. Scholar/Teach
124. Vax Producer
125. Accord
126. AIS II
127. Computer Assisted Self Learning
128. C-Pilot
129. Electronic Publishing
130. KSS: Author
131. Maestro
132. Unix Instructional Workbench
133. Adroit
134. Concurrent Development Series
135. Creative Coursewriter
136. Electronic Publishing System
137. Images III
138. Shelley
139. Teachers Aide
140. Trillian Concurrent Authoring System
141. Velosity 3D

Appendix 4: Authoring languages considered for instructional and technical assessment

Scholar Teach
PC Class
Trainer 4000
Micro Plato
PC Pilot
Interactive Authoring System


References

Anandam, K. & Kelly, D. (1981). GEM. Guided Exposure to Microcomputers: An Interactive Video Program. Miami, Fl: Miami-Dade Community College, p.3.

Barker, P. (1986). A Practical Introduction to Authoring for Computer Assisted Instruction Part 6: Interactive audio. British Journal of Educational Technology, 17(2), 10-128.

Barker, P. (1987). A Practical Introduction to Authoring for Computer Assisted Instruction Part 8: Multimedia CAL. British Journal of Educational Technology, 18(1), 6-40.

Barker, P. (1987). A Practical Introduction to Authoring for Computer Assisted Instruction Part 9: Database support. British Journal of Educational Technology, 18(2), 122-160.

Barker, P. & Ravinder, S. (1985). A Practical Introduction to Authoring for Computer Assisted Instruction Part 5: PHILVAS. British Journal of Educational Technology, 16(3), 218-236.

Barker, P. & Singh, R. (1983). A Practical Introduction to Authoring for Computer Assisted Instruction Part 2: PILOT. British Journal of Educational Technology, 14(3), 174-200.

Barker, P. & Singh, R. (1984). A Practical Introduction to Authoring for Computer Assisted Instruction Part 3: MICROTEXT. British Journal of Educational Technology, 15(2), 82-106.

Barker, P. & Skipper, T. (1986). A Practical Introduction to Authoring for Computer Assisted Instruction Part 7: Graphics Support. British Journal of Educational Technology, 17(3), 194-212.

Barker, P. & Steele, J. (1983). A Practical Introduction to Authoring for Computer Assisted Instruction Part 1: IPS. British Journal of Educational Technology, 14(1), 26-45.

Barker, P. & Wilford, J. (1985). A Practical Introduction to Authoring for Computer Assisted Instruction Part 4: STAF. British Journal of Educational Technology, 16(2), 115-135.

Becker, S. (1987). How to Evaluate CBT Authoring Systems. Data Training, July, pp. 8-19.

Bork, A. (1984). Computers and the Future: Education. Computer Education, 8(1), 1-4.

Burke, R. (1982). CAI Sourcebook. Prentice-Hall, Inc. New Jersey.

Clark, D. R. (1984). The Role of the Videodisc in Education and Training. Media in Education and Development, December, pp. 190-192.

Data Training (1987). The 1987 CBT Guide: Reference Guide to Authoring Systems and Courseware Vendors. Weingarten Publications.

Davidove, E. (1987). Evaluation and Selection of Courseware Development Software. Educational Technology, pp. 34-37.

Dean, C. & Whitlock, Q. (1982). A Handbook of Computer Based Training. Kogan Page, London.

Freedman, R. (1984). Knowledge-Based Courseware Authoring. Training Technology Journal, 1(4), 4-9.

Gery, G. (1987). Making CBT Happen: Prescription for Successful Implementation of Computer-Based Training in Your Organisation. Boston: Weingarten Publications.

Hartley, J. R. (1981). Learner Initiatives in Computer-Assisted Learning. In U. Howe (Ed.), Microcomputers in Secondary Education. London: Kogan Page.

Hasselbring, T. (1984). Research on the Effectiveness of Computer-Based Instruction: A Review. Technical Report No. 84.1.3 (ED 262 754).

Hosie, P. (1987). Adopting Interactive Videodisc Technology for Education. Educational Technology, July, pp. 5-10.

Jamieson, D. (1985). Selecting a CBT System. Information Technology Month Seminar, Perth, 1985.

Jamieson, D. (1988). Trade-offs in the Selection of Computer Based Training Systems. AITD National Conference, Canberra, 1988.

Kearsley, G. (1983). Computer-Based Training: A Guide to Selection and Implementation. Addison-Wesley Publishing Company: Reading, Massachusetts.

Kozma, B. (1986). Present and Future Computer Courseware Authoring Systems. Educational Technology, pp. 39-41.

Kulik, J. et al. (1983). Effectiveness of Computer-Based College Teaching: A Meta-analysis of Findings. Review of Educational Research, 50, Winter, 525-544.

Penter, K. (1985). Wesrev Screening Checklist. WESREV, 1, pp. 3-8.

Salisbury, D., Richards, B. & Klein, J. (1985). Prescription for the Design of Practice Activities for Learning: An Integration from Instructional Design Theories. Florida State University and Hazeltine Corporation.

Salder, R. & Coleman, J. (1987). CBT Evaluation: Why, What, Who and How, The 1987 CBT Guide. Weingarten Publications Inc.

Sautter, B. (1987). Not Just for Training Anymore. Data Training, pp. 20-21.

Wilson, L. (1987). Computer-Based Training Today: A Guide to Research, Specialised Terms and Publications in the Field. American Society for Training and Development, pp. 41-53.

Wilson, L. (1987). What the Research Says about Computer Assisted Instruction. Computer Based Training Today, American Society for Training and Development, Alexandria.

Please cite as: Hosie, P. and Jamieson, D. (1988). Plenty of chaff - some wheat: Evaluating CBT authoring packages for industry and education. In J. Steele and J. G. Hedberg (Eds), Designing for Learning in Industry and Education, 208-223. Proceedings of EdTech'88. Canberra: AJET Publications.

© 1988 The authors and ASET. Last revised 9 May 2003. HTML editor: Roger Atkinson