Choosing a Computer Based Training (CBT) authoring language is a daunting task. With more than one hundred major packages on offer for microcomputers, errors are costly in terms of scarce capital, opportunities foregone and human resources wasted. Selection criteria must be matched with the learner needs and training requirements of an organisation, and not be technology driven. This paper traces the process involved in selecting a package for a large government statutory authority. The selection process documented should assist other large government and private organisations to select a suitable CBT authoring system rapidly.
Perhaps the most logical starting point for this paper is to clarify what will not be included. Conference papers about CBT tend to routinely consider "central issues" of concern. While there is no denying the importance of such concerns there is a case for not waiting for these issues to be satisfactorily resolved before attempting to select a Computer Based Training (CBT) authoring system.
For the purpose of this paper it is taken as established that authoring languages for CBT represent, at present, the only realistic way of productively developing the majority of CBT applications. Some will quibble with this assertion, but this debate can be taken up in another forum. If you are not quite sure what a CBT authoring system or language is, then Appendix 1 should offer some joy. No attempt will be made to argue the pros and cons of using source code as opposed to authoring languages for developing CBT; a summary of these issues is provided in Appendix 2 for those unfamiliar with this debate. For the novice it is worth mentioning that we are hunting for a complete authoring system, not simply a computer language.
At a more fundamental level there will be no attempt to justify the use of CBT in comparison to traditional delivery methods. However, there are sufficient instructional, economic, technical and organisational indicators to suggest a reasonable success rate for CBT. Suffice to say that CBT works when it is well designed and used in the appropriate context. Meta-evaluations of recent and established research roundly support this assertion (Wilson, 1987).
Important learning factors such as interactivity, colour, motivation, remediation, text and graphics, course completion rates and feedback will be discussed when they directly impinge on the software or hardware being considered. No attempt has been made to present a thesis apart from restating what should be apparent: learner requirements should be of primary concern when selecting a CBT authoring system.
Thus all the pressures of selecting a system with inadequate time, human resources, equivocal support for implementing CBT by management and the need to incorporate prior technology commitments in a large (5000+ employees) government statutory authority will be evident. Hopefully the results will be of value, although probably not directly transferable, to other medium to large government organisations as well as private concerns.
We are aware that no CBT authoring system could be all things to all people. There was no task compelling enough to warrant the selection of a dedicated system with specialist capabilities.
Actually, the initial rationale for selecting a new CBT system was primarily to upgrade the existing CBT machine and software, which was acquired five years ago, before the current generation of microcomputer based CBT authoring systems was widely available.
In the wash-up we settled for a re-examination of the initial reasons for introducing CBT.
Of more consequence, however, are the pragmatics of how this aids a system wide adoption strategy. MS DOS is the most widely used microcomputer standard and there is a considerable amount of software available off the shelf.
Conducting training for new employees continues to lead to poor utilisation of human resources and could be more appropriately delivered using CBT thus freeing training staff to develop specialised courses not suitable for delivery using CBT. In effect a "teacher substitute" was required for certain well defined tasks.
Like many other government departments this organisation is in the advanced stages of being functionally re-organised. In a climate of uncertainty it is unrealistic to expect people to be enamoured of joint ventures when the long term pay-offs are uncertain. Regardless of these socio-political impediments, the diverse nature of the work undertaken militated against selecting a single CBT authoring system that would satisfy everyone's requirements. As Gery (1987) observes, it is not unusual for large organisations to select a number of radically different (and incompatible) CBT systems to suit the various types of training being undertaken.
As Kearsley (1983, p.14) observes computers have certain advantages in instructional settings, such as: permitting "students to learn at their own pace, individual learning styles are considered, resulting in increased student satisfaction. Most importantly, there is more control over learning materials and learning process".
Bork (1984) agrees that computers are going to be an important factor in all human learning because they make learning truly interactive for large numbers of learners on a cost efficient basis. Bork (1984, p. 3) also observes that "... Evidence suggests (Hartley, 1981) that student control over learner strategy is the most efficient approach to CAL design. Encouraging individual routes through information will assist students to become more actively involved in the learning process". Learners "should be given as much control as possible over programs, or at least opportunities for regaining control at some stage of the instructional sequence" (Hosie, 1987, p. 7).
Few educationists should need to be reminded that learning is not a passive process. Understanding and knowledge involve active processing rather than passive reception.
A consistent claim made in favour of using IV is that "it changes the student from passive observer to active participant" (Anandam and Kelly, 1981, p.3). The ineffectiveness of modern media as a learning device (in comparison with the written text) is due to the lack of opportunity for interrogation, and allied to this observation is the criticism that the learner loses control over the pace of instruction (Clark, 1984).
Figure 1: Growth in CBT systems and courseware vendors
According to Data Training (p.5) "It isn't just the number of systems that keeps growing either, it's their capabilities... the variety of features and options is fast outstripping our ability to report on them succinctly". If the experts can't keep pace what hope is there for neophytes to the game?
A publication like Data Training's "A Resource Guide to Computer Aided Instruction" is of inestimable value when initially sorting out which system(s) to choose. But such a publication does, by its own admission, have shortcomings. There is no attempt to rate the quality of the features mentioned, only their presence or absence - essentially a binary analysis. Despite this limitation such information saves considerable brain and legwork, permitting unsuitable products to be eliminated in the initial screening.
Figure 2: Steps in the selection of a CBT system
A method of screening for "hardware compatibility" and "software readiness" was employed (Penter, 1983). Packages that failed to meet the essential criteria described below were given no further consideration. No attempt was made to rate these functions, only to establish whether the system had them or not.
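The binary screen described above amounts to a simple filter: a package survives only if it possesses every essential criterion, and features are recorded as present or absent, never rated. A minimal sketch in Python follows (the criteria and package names are invented for illustration):

```python
# Hypothetical sketch of the initial binary screen. A package is kept
# only if it satisfies every essential criterion; no quality rating is
# attempted at this stage. The criteria names below are invented.

ESSENTIAL = {"runs_on_ms_dos", "student_tracking", "local_support"}

def passes_screen(package_features):
    """package_features: the set of criteria a package satisfies."""
    return ESSENTIAL <= package_features   # subset test: all essentials present

# Invented candidates for illustration.
candidates = {
    "Package X": {"runs_on_ms_dos", "student_tracking", "local_support", "mouse"},
    "Package Y": {"runs_on_ms_dos", "mouse"},
}
shortlist = [name for name, feats in candidates.items() if passes_screen(feats)]
print(shortlist)   # Package Y fails the screen and is given no further consideration
```

The subset test captures the paper's rule exactly: failing any one essential criterion eliminates a package outright, regardless of its other strengths.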
A section for a "qualitative response" was added. Naturally this could be criticised, quite reasonably, for introducing too subjective a criterion into the evaluation. This claim can be readily countered by the observation that many CBT systems are selected using purely subjective methods (Kearsley, 1983): my brother-in-law sells them, the vendor sponsored our association's conference, or I like the brochure. As indefensible as such methods are, the reality is that these "procedures" are adopted on occasion, and there is room for a frankly subjective analysis - I like using it / I don't like using it. After all, an affinity with what learners find engaging should be part of the repertoire of a CBT designer. Try explaining that to your brother-in-law!
Before this task could be undertaken a detailed evaluation of specific criteria was necessary. Rather than simply drawing up a shopping list of requirements, an easy to use rating system was established. The opinions of colleagues using the software were sought when possible. However, because these opinions are subjective as well as second hand, judicious weighting was given to this category.
Reviews of the software concerned are listed in the bibliography. The quality and quantity of these reviews varied but on balance they remain a valuable source of information and were therefore included in the rating considerations.
Only when an opinion or review had a strongly positive or negative evaluation was there a large variation in the score given. All packages were given a basal rating of 50% of the total (ie. if the possible score was 50 points, 25 points were automatically allocated) in an attempt to correct for rating errors.
Two five point rating scales were employed. The "desirability" scale measured perceived learner and organisation features with the rating scale providing a method of evaluating the quality of particular features. The two scores were multiplied to give a raw weighted score. Hence the package with the highest score gets purchased (in theory at least).
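The arithmetic of this scheme can be sketched in a few lines of Python (a hypothetical illustration: the feature scores are invented, and the basal 50% allocation is applied as described above):

```python
# Hypothetical sketch of the two-scale weighting scheme. Each feature
# receives a "desirability" (1-5) and a "rating" (1-5); the two are
# multiplied to give a raw weighted score, and every package starts
# from a basal 50% of the maximum possible score to dampen rating errors.

def weighted_score(features):
    """features: list of (desirability, rating) pairs, each 1-5."""
    max_raw = sum(d * 5 for d, _ in features)   # best case: rating of 5 on every feature
    raw = sum(d * r for d, r in features)
    basal = max_raw * 0.5                       # automatic 50% allocation
    return basal + raw * 0.5                    # remaining 50% earned on merit

# Invented example: three features of a hypothetical package.
package_a = [(5, 4), (3, 3), (4, 5)]            # (desirability, rating)
print(weighted_score(package_a))                # 54.5 out of a possible 60
```

A package matching every desired feature perfectly would earn the full maximum; one rated poorly across the board still retains its basal half, which is what limits the damage a single harsh (or erroneous) rating can do.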
A sample of the assessment instrument

Desirability
5 = Very important
4 = Highly desirable
3 = Desired
2 = Desired but not required
1 = Present but not required

Rating
5 = 100% match required
4 = 75% match required
3 = 50% match required
2 = 25% match required
1 = 0% match required

Qualitative responses
1) Subjective analysis: 50 points
2) Colleagues' evaluations: 20 points
3) Literature comments: 30 points
Total: 100 points
Each item below is scored on both scales: Desirability (1-5) and Rating (1-5).

Authoring
Interface with other software packages
Editor for unresolved branches
Ability to use an external full screen text editor
Control video disk
Control videotape

Presentation
Use of mouse device for input and selection
Graphics comparable to CGA
Graphics comparable to EGA
Multiple fonts
Random test generation
Matching answer analysis
Student or author controlled branching
Easy back paging
Resume facility

Management
Student registration
Security facility
Student resume
Student/Author communication
Student tracking

Vendor Support
On site author training
CBT based author training
Australian based technical support
Reliable client base
The intention of the cost analysis is to determine the comparative costs of all viable solutions on a normalised basis. The analysis will identify variations in input for a common level of output. Systems not attaining this common level of output will be normalised, with the costs involved in raising the solution to the required output level included in the costing for that solution.
This analysis will take into account not only initial capital costs, but recurring costs, based on:
The results of this stage may cause a change in the rank ordering of the desirable systems.
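The normalisation step can be illustrated with a small sketch (all figures and system names are invented; the point is that a system falling short of the common output level has the cost of the upgrade added before comparison, which can indeed reorder the ranking):

```python
# Hypothetical illustration of cost normalisation. A system that does
# not reach the common output level has the cost of raising it to that
# level added before the comparison is made.

def normalised_cost(capital, recurring_per_year, years, upgrade_cost=0):
    """Total cost over the comparison period, including any cost needed
    to bring the system up to the common output level."""
    return capital + recurring_per_year * years + upgrade_cost

# Invented figures for two candidate systems over a five-year horizon.
system_a = normalised_cost(capital=20000, recurring_per_year=3000, years=5)
system_b = normalised_cost(capital=15000, recurring_per_year=2500, years=5,
                           upgrade_cost=6000)   # needs extra hardware to match output
ranking = sorted([("A", system_a), ("B", system_b)], key=lambda pair: pair[1])
print(ranking)   # B still ranks cheaper here, even after normalisation
```

Note that the comparison is only meaningful once every candidate has been brought to the same output level; comparing raw purchase prices would favour the cheaper but less capable system.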
The purpose of a prototype is to determine if the design of the system is appropriate before progressing. In this case, the purpose of the prototype is to validate the system characteristics and capabilities identified earlier. Thus, a prototype represents some small portion of the system, usually selected as exemplary of the full scale effort.
A pilot test would involve trying out the system with a small group of "students". Such "students" are typically colleagues or subject matter experts who are not directly involved in the development of the system. The purpose of a pilot is to detect any major problems in the hardware, software, courseware, courseware development and project management process.
Figure 3: Trials (Kearsley, 1983)
The final stage of a trial is field testing in which the system is tried out in the actual training setting it is being designed for.
Staffing and time constraints will prevent much of these activities being undertaken. If time permits a limited trial will be carried out, provided suitable courseware is available.
Instead, reliance will be placed on discussions with organisations that are using the systems under consideration. It is hoped that these users will have gone through a more formal process of trials, as discussed by Kearsley, and that the results of their experience can be applied to the organisation's needs.
Although this study is incomplete, there are a number of observations that are worthy of mention.
Each participant must feel some degree of ownership, not only of the solution, but of the problem and the selection process as well. In this study, because of time constraints, training staff have acted as the users' representatives. Consequently, a significant part of the implementation will be to market the solution to the users.
Management enquiries about slipped deadlines or delays suggest that completion of the project is of some importance. Conversely, a lack of concern indicates that the project lacks priority. At times it has been necessary to seek confirmation from management of their willingness to proceed. On each occasion, however, the boundaries of resistance have been pushed back slightly. Apathy has been vanquished!
Many CBT projects have failed for lack of the necessary resources, whether human, financial, equipment or facilities. Time has been the biggest enemy. This project has had to be accommodated within the work priorities of the organisation and the authors.
Within the organisation there are tasks of higher priority. Accordingly, the progression of this project has been on a part time basis. Whilst it would be desirable to have staff assigned on a full time basis, organisational considerations (internal and external) have precluded it in this case. On balance however, considerable progress has been, and can be, achieved with part time participation, providing the other conditions outlined are favourable.
Current experience with CBT within the organisation has convinced managers that CBT is a viable delivery mechanism. There is an enthusiasm that has to be channelled in the right direction. The identification of training opportunity or problems is paramount. It is too easy to slip into technical and equipment discussions on CBT and lose sight of, or not clearly define, the training problem being addressed.
Based on its previous experience with CBT, the organisation believes that there will be clear cut success in terms of improved job performance, less time off the job for training, reduced training costs for trainers and students as well as an increase in worker satisfaction with the type and timing of the training. While not undertaking an extensive cost benefit study prior to this project being initiated, management had sufficient confidence in CBT to proceed with a new CBT system evaluation.
An interactive learning experience between a learner and a computer in which the computer provides the majority of the stimulus, the learner must respond, and the computer analyses the response and provides feedback to the learner. (Gery, 1987, p. 6)

CAI is the direct use of the computer for the facilitation and certification of learning - that is, using the computer to make learning easier and more likely to occur (facilitation), as well as using the computer to create a record proving that learning has occurred (certification). (Burke, 1982, p. 16) [for CAI read CBT - ed.]
The extensive use of the computer in the development, delivery and administration of training (Jamieson, 1985).
Type | Examples | Advantages | Disadvantages
General purpose language | BASIC, PASCAL, FORTRAN | Very flexible; widely available | Takes longer to program
System specific author language | TUTOR (PLATO), COURSEWRITER (IIS), CAMIL (AIS) | Optimises capabilities of specific machines | Won't work on another system
System independent author language | PILOT, PLANIT, LOGO | Runs on many machines | May not optimise capabilities of specific system
Author languages possess a number of features that make courseware development easier. These include capabilities for display formatting, graphics creation, test scoring, answer matching, student data collection and report generation. All of these capabilities can be achieved using a general purpose language, but in an author language they are built in. (In fact, author languages are written in general purpose languages, usually an assembly language.) (Kearsley, 1983, pp. 115-116)
1. Coursemaster 2. Philvas 3. Microtext 4. Trainer 5. Scholar Teach 6. Unison 7. WICAT 8. CPAL 9. Pilot/Superpilot (Apple) 10. Mumedala 11. National Author Language 12. Tel 13. G Pilot 14. Adapt 15. Tencore 16. Tutor 17. Accord 18. Coursewriter/IIS 19. Obie-1-Knobe 20. Infowindows 21. Author 22. PASS 23. Super Socrates 24. Mandarin 25. Quest Authoring System 26. Allen Communications 27. PIL-U 28. PASS 29. Adroit 30. Peducator 31. Shelley (co-resident) 32. Phoenix 33. SAM 34. Scholar Teach 3 35. TICCIT 36. Educator 37. Access 38. AIS PC 39. ASAP 40. Authology 41. The Author 42. GIGI 43. WISE 44. ICOM 45. FORGE 46. PC-Class 47. IPS - Instructional Programming System 48. CAN/8 49. The Author Plus 50. The Author Plus Color 51. Avid 52. Caste 53. CAT 54. CAT/S 55. CDS/Genesis 56. CLAS 57. Cobra 58. Concurrent Development Series 59. Consulting Expert System 60. Course of Action 61. Coursewriter 62. CSR Trainer - 4000 63. Electronic Publishing System 64. Exambank 65. Exemplar 66. The Grand Inquisitor 67. Images III 68. Infowindow 69. Infowriter 70. The Instructor
71. Interact 72. Interaudio 73. KSS: Author 74. Maestro 75. McGraw-Hill Interactive Authoring System 76. Microinstructor 77. NCR Pilot II 78. Oasys & Pilot Plus 79. Of Course! 80. PC/Pilot 81. PC Train 82. Phoenix-Micro 83. Plato PCD1 Advanced Tutorial Model (ATM) 84. Plato PCD1 Certification Testing Model (CTM) 85. Plato PCD1 Drill and Practice Model (DPM) 86. Plato PCD1 Graphic Simulation Model (GSM) 87. Plato PCD1 Situation Simulation Model 88. Plato PCD1 Tutorial Lesson Model (TLM) 89. Plato PCD3 90. Prophit Mentor 91. Propi 92. PTP/1 Professional Trainer's Package 93. Quest 94. R2-C 95. Scrypt 96. SUMMIT 97. Taiga 98. tbt Author 99. TEL - Training and Education Language 100. Tencore Language Authoring System Tencore Assistant 101. Tencore Plus 102. Trainer Turned Author 103. Trillian Concurrent Authoring System 104. Unison Author Language 105. VAX Producer 106. Velosity 3D 107. Video Nova Authoring System 108. Wise 109. ACT I (Aural Comprehension Trainer I) 110. AIS II 111. Avid 112. CASTE (Course Assembly System and Tutorial Environment) 113. Classroom ACT (Aural Comprehension Trainer) 114. Electronic Publishing System 115. MicroTICCIT 116. SAS/AF Software 117. VAX Producer 118. VS Author 119. Omnisim 120. Phoenix 121. Micro Plato Author Language 122. SAS/AF Software 123. Scholar/Teach 124. Vax Producer 125. Accord 126. AIS II 127. Computer Assisted Self Learning 128. C-Pilot 129. Electronic Publishing 130. KSS: Author 131. Maestro 132. Unix Instructional Workbench 133. Adroit 134. Concurrent Development Series 135. Creative Coursewriter 136. Electronic Publishing System 137. Images III 138. Shelley 139. Teachers Aide 140. Trillian Concurrent Authoring System 141. Velosity 3D
Barker, P. (1986). A Practical Introduction to Authoring for Computer Assisted Instruction Part 6: Interactive audio. British Journal of Educational Technology, 17(2), 10-128.
Barker, P. (1987). A Practical Introduction to Authoring for Computer Assisted Instruction Part 8: Multimedia CAL. British Journal of Educational Technology, 18(1), 6-40.
Barker, P. (1987). A Practical Introduction to Authoring for Computer Assisted Instruction Part 9: Database support. British Journal of Educational Technology, 18(2), 122-160.
Barker, P. & Ravinder, S. (1985). A Practical Introduction to Authoring for Computer Assisted Instruction Part 5: PHILVAS. British Journal of Educational Technology, 16(3), 218-236.
Barker, P. & Singh, R. (1983). A Practical Introduction to Authoring for Computer Assisted Instruction Part 2: PILOT. British Journal of Educational Technology, 14(3), 174-200.
Barker, P. & Singh, R. (1984). A Practical Introduction to Authoring for Computer Assisted Instruction Part 3: MICROTEXT. British Journal of Educational Technology, 15(2), 82-106.
Barker, P. & Skipper, T. (1986). A Practical Introduction to Authoring for Computer Assisted Instruction Part 7: Graphics Support. British Journal of Educational Technology, 17(3), 194- 212.
Barker, P. & Steele, J. (1983). A Practical Introduction to Authoring for Computer Assisted Instruction Part 1: IPS. British Journal of Educational Technology, 14(1), 26-45.
Barker, P. & Wilford, J. (1985). A Practical Introduction to Authoring for Computer Assisted Instruction Part 4: STAF. British Journal of Educational Technology, 16(2), 115-135.
Becker, S. (1987). How to Evaluate CBT Authoring Systems. Data Training, July pp.8-19.
Bork, A. (1984). Computers and the Future: Education. Computer Education, 8(1), 1-4.
Burke, R. (1982). CAI Sourcebook. Prentice-Hall, Inc. New Jersey.
Clark, D. R. (1984). The Role of the Videodisc in Education and Training. Media in Education and Development, December, pp. 190-192.
Data Training (1987). The 1987 CBT Guide: Reference Guide to Authoring Systems and Courseware Vendors. Weingarten Publications.
Davidove, E. (1987). Evaluation and Selection of Courseware Development Software. Educational Technology, pp. 34-37.
Dean, C. & Whitlock, Q. (1982). A Handbook of Computer Based Training. Kogan Page, London.
Freedman, R. (1984). Knowledge-Based Courseware Authoring. Training Technology Journal, 1(4), 4-9.
Gery, G. (1987). Making CBT Happen: Prescription for Successful Implementation of Computer-Based Training in Your Organisation. Boston: Weingarten Publications.
Hartley, J. R. (1981). Learner Initiatives in Computer-Assisted Learning. In U. Howe (Ed.), Microcomputers in Secondary Education. London: Kogan Page.
Hasselbring, T. (1984). Research on the Effectiveness of Computer-Based Instruction: A Review. Technical Report No. 84. 1. 3 (ED 262 754).
Hosie, P. (1987). Adopting Interactive Videodisc Technology for Education. Educational Technology, July, 1987, pp. 5-10.
Jamieson, D. (1985). Selecting a CBT System. Information Technology Month Seminar, Perth, 1985.
Jamieson, D. (1988). Trade-offs in the Selection of Computer Based Training Systems. AITD National Conference, Canberra, 1988.
Kearsley, G. (1983). Computer-Based Training: A Guide to Selection and Implementation. Addison-Wesley Publishing Company: Reading, Massachusetts, 1983.
Kozma, B. (1986). Present and Future Computer Courseware Authoring Systems. Educational Technology, pp. 39-41.
Kulik, J. et al. (1983). Effectiveness of Computer-Based College Teaching: A Meta-analysis of Findings. Review of Educational Research, 50, Winter, 525-44.
Penter, K. (1985). Wesrev Screening Checklist. WESREV, 1, pp. 3-8.
Salisbury, D., Richards, B. & Klein, J. (1985). Prescription for the Design of Practice Activities for Learning: An Integration from Instructional Design Theories. Florida State University and Hazeltine Corporation.
Salder, R. & Coleman, J. (1987). CBT Evaluation: Why, What, Who and How, The 1987 CBT Guide. Weingarten Publications Inc.
Sautter, B. (1987). Not Just for Training Anymore. Data Training, pp.20-21.
Wilson, L. (1987). Computer-Based Training Today: A Guide to Research, Specialised Terms and Publications in the Field. American Society for Training and Development, pp.41-53.
Wilson, L. (1987). What the Research Says about Computer Assisted Instruction. Computer Based Training Today, American Society for Training and Development, Alexandria.
Please cite as: Hosie, P. and Jamieson, D. (1988). Plenty of chaff - some wheat: Evaluating CBT authoring packages for industry and education. In J. Steele and J. G. Hedberg (Eds), Designing for Learning in Industry and Education, 208-223. Proceedings of EdTech'88. Canberra: AJET Publications. http://www.aset.org.au/confs/edtech88/hosie.html