e-Journal of Instructional Science and Technology (e-JIST) Vol. 9 No. 2

Evaluating a Program to Increase Faculty Use of Technology in Teaching and Learning



Carmen M. Latterell, Ph.D.
University of Minnesota, Duluth
clattere@d.umn.edu

Linda Deneen
University of Minnesota, Duluth

Abstract

Higher education institutions are trying various methods to encourage faculty to use technology in their teaching and in students’ learning. However, it is difficult to measure the effectiveness of these methods. This article describes both what one should not try to measure and what one should measure in this evaluation process. In addition, we give the evaluation results of our intervention program.

A growing body of literature is showing the potential of technology-enhanced courses to facilitate the learning process of undergraduate students (Berger, 1992; Cummings, 1996; Lee and Johnson, 1998). However, most undergraduate institutions are finding it difficult to enlist large numbers of faculty members in adopting technology-based solutions in their classrooms (Lee and Johnson, 1998). Among other barriers, there are deficiencies in the technology skills of the faculty (Molenda and Sullivan, 2000), and unless faculty are given support, it is unlikely these deficiencies will be addressed (Danielson and Burton, 1999).

Our institution was no exception. In 1997, 282 of our 351 faculty responded to an internal survey; only 12% (34 of 282) stated that they used web-based course materials in their teaching. About a quarter said they used the web "sometimes," while well over half reported never using such materials. These results seem similar to a 1998 national survey showing "about one-half of all college courses using E-mail, one-third requiring students to explore Internet resources, and one-fourth offering class materials and resources as web pages" (Molenda and Sullivan, 2000, p. 6). It is also worth noting, however, that "web usage is typically limited to basics such as the posting of course syllabi" (Molenda and Sullivan, 2000, p. 6).

We made two major responses to this problem. One was to submit a grant proposal; the evaluation of the resulting program is what we report on in this paper. The other was to implement our "Tech Camps." The first of these was held in March 1999. The camps lay the groundwork for designing and teaching courses using technological tools. They provide faculty participants with sufficient and varied experience to comfortably bring technology to bear on the academic courses they teach. Each camp includes workshops and hands-on activities. Faculty are expected to produce technology-enhanced course materials and delivery that can serve as a showcase and model for others. Faculty receive $2500 to purchase a laptop computer, $500 for supporting software, and 20 hours of student assistance. Further information about our Tech Camps is available at http://www.d.umn.edu/itss/etrg/techcamp/. With the implementation of Tech Camp, the percentage of faculty using technology in their teaching has increased.

To further address the need, the Bush Foundation (http://www.bushfoundation.org/) awarded a three-year grant to the four campuses of the University of Minnesota system beginning in July 2001. The University used the grant funding to provide faculty with special assistance to expand their knowledge and use of technology-enhanced learning. The technology enhancements include course web sites, technology-enhanced presentations, on-line discussions, web-based tools and services, and multimedia enhancements.

The program was competitive: faculty members applied by identifying a course to enhance with technology. Faculty targeted by the program identified themselves as "late-bloomers" in their technology skills. At the University of Minnesota Duluth, we named this cohort the "technophytes." We intend to start a new cohort in each of the three years of the grant. Our first cohort consisted of 20 faculty. The technophyte web site is found at http://www.d.umn.edu/itss/etrg/technophytes/.

Technophytes receive a combination of training, resources, and support. Participants are expected to develop and implement a plan for the adaptation of course materials. By the end of the year, faculty must have delivered a course that is enhanced with technology and can serve students well. To accomplish the goals, each faculty member meets regularly with an assigned mentor who works with the faculty member on skill development as well as instructional design. The cohort also meets occasionally as a group to share ideas, frustrations, and encouragement.

Institutions often skip the evaluation step of new programs, but initiatives need to be evaluated (Lee and Johnson, 1998). Putting it succinctly, "any teaching innovation must be formatively evaluated if it is to be optimized" (Laurillard, 1993, p. 247). The mere presence of a program does not ensure its success (Ehrmann, 1999). Of course, whenever evaluation occurs, there are two overriding questions: What do we really want to measure? And how might one measure it? Before answering these two questions, we want to address a related one: we first determined what we really did not want to measure in order to think about what we did want to measure.

What We Are Not Measuring

It is tempting in these situations to think the question is: Do technology-enhanced courses offer a better learning experience? We tend to agree with scholars such as Ehrmann (1994; 1995) who consider that not only the wrong question, but a rather useless one. In our situation, it is the inappropriate question for two reasons. First, we most certainly do not have a controlled study. Second, we already believe in the benefits of technology-enhanced courses. Although there are many benefits, two alone convince us of their value. First, technology-enhanced courses offer the chance of including a wider spectrum of learning styles than courses without technology do (Roschelle, Pea, Hoadley, Gordin, and Means, 2000). Second, it is clear that this is a technological world, and if we are in the business of educating citizens to function in this world, they need to be comfortable with technology. In this sense, then, we take any use of technology as a gain because we believe that technologies create possibilities.

So, we do not want to measure whether our program increased student learning, per se. In addition, others have viewed similar situations as investments in educational technology and thus conducted cost-analysis evaluations (Ehrmann, 1991; Fleit, 1994). They have looked at such things as improvements in the capabilities of students in degree programs and the net financial consequences of program changes. These are important questions to ask and may eventually become questions of interest for our program. But we did not find that approach compelling at this time, as there is a more immediate question.

We wanted to know if our program worked. Since the goal of the program was to help faculty to expand their knowledge and use of technology-enhanced learning, a reasonable thing to measure is the degree to which faculty expanded their knowledge and use of technology-enhanced learning.

Our research question, then, is: how can we measure whether faculty expanded their use of technology-enhanced learning? We share in this article the particular plan that we used, in the hope that other higher education institutions can use aspects of it as well. We also offer this article as a demonstration that one can (and should) evaluate these types of programs.

What We Are Measuring

We believe that there are four broad areas that need to be assessed in order to answer our question of whether the faculty expanded their use of technology-enhanced learning:

  • Was the implementation of the program successful?

  • Did the faculty change in their attitudes and perceived skill level?

  • Did the faculty change in their practices?

  • Did the students benefit?

The first area addresses whether the program itself was implemented effectively. The second area addresses the idea that faculty often resist enhancing courses with technology due to a lack of willingness or confidence, so their attitudes and perceptions of their skills are as important as their actual skills. The third area considers whether the faculty actually made substantive changes as a result of the program. The final area acknowledges that the bottom line of enhancing a course with technology is to benefit students. We have included all the major players: the program itself, the faculty, and the students.

In the remainder of this article, we detail how we measured the answer to each of these questions. In addition, although it is not the purpose of this article to give our results, we do provide some results for the sake of the interested reader. It should also be noted that our instruments do not measure mutually exclusive things. In other words, the instrument we designed to answer one question may (and often does) contribute to the answer to another question. For ease of reporting, we keep our instruments separate, but the reader should note that we are not suggesting that our instruments match up one-for-one with our questions.

Was the implementation of the program successful?

Our issue here is whether we could have implemented the program better and/or whether the program should have been implemented differently. We wanted this to be a straightforward measure of the level of satisfaction of our faculty with the program. See Table 1 for a list of the Likert-scale items used, along with the results for 18 of our members [1]. We also asked faculty two open-ended items: "This program would work better for me if…" and "The things that are working well for me in this program are…" Thus, we simply asked the faculty at midpoint what had been working and what had not.

Item | I strongly disagree | I disagree | I don't know | I agree | I strongly agree
The program is facilitating my developing the technological skills that I wanted. | 0 | 0 | 1 | 7 | 9
This program is having a positive impact on my students’ learning. | 0 | 0 | 4 | 9 | 4
I am receiving the help that I need from this program. | 0 | 0 | 0 | 3 | 14
I like the cohort aspect of this program. | 0 | 1 | 5 | 6 | 5
The needed technological tools are available to me. | 0 | 0 | 3 | 4 | 11
I am able to express my needs in regard to this program. | 0 | 0 | 1 | 5 | 11
There is an appropriate level of accountability in this program. | 0 | 0 | 4 | 10 | 3
There is enough structure in this program to keep me making progress toward my goals. | 0 | 2 | 2 | 7 | 5
I am benefiting from this program. | 0 | 0 | 0 | 6 | 10
I am finding success in implementing the technology enhancement that I wanted to make. | 0 | 1 | 1 | 8 | 6
I am less able to make changes in my courses than I had wished. | 3 | 8 | 4 | 1 | 0

Table 1: Tallies for the Evaluation of the Program Itself

It appears that, overall, the faculty believed the program worked. It seems that some faculty looked for more structure than others. Responses to the item on how the program would work better almost all expressed a wish for more time. In regard to what worked well, there was strong agreement that having one-on-one help was key.

Did the faculty change in their attitudes and perceived skill level?

To measure the attitudes (and perceived skill changes), we again decided to ask faculty. This time, however, we did so in a pre- and post-manner. We used a survey we developed and named "Assessment of Faculty Attitudes and Perceived Skills in Technology Use in Teaching." Items were a combination of items taken from Profiler [2] and items of our own creation. Our items follow.

Each statement below is answered with one of the following:

  1. I am currently not able.

  2. I am in the process of learning how.

  3. I am able but not efficient.

  4. I am able and efficiently do so as I see fit.

  1. I use technology tools to enhance student learning.

  2. I use technology tools to communicate information and ideas effectively to my students.

  3. I design technology-rich experiences and environments for effective learning.

  4. I identify technology tools based on the appropriateness to specific tasks and student needs.

  5. I implement technology-rich learning that supports content matter.

  6. I use technology to foster student development of higher-order thinking skills.

  7. I use technology to assess student learning.

  8. I model practices that demonstrate understanding of legal, ethical, cultural, health, and societal issues related to technology for my students.

  9. I create instructional materials involving the Internet.

  10. I assist students to routinely identify the appropriate technology tool for their activities.

  11. I encourage students to work on projects using technology and the Internet.

  12. I develop web-based collaborative student instructional activities.

  13. I use communication tools, such as email or chat, to enhance my courses for my students.

Each statement below is answered with one of the following:

  1. I strongly disagree.

  2. I disagree.

  3. I don't know.

  4. I agree.

  5. I strongly agree.

  1. I think integrating technology pulls too much focus away from course content.

  2. I am interested in learning ways to integrate technology into the courses that I teach.

  3. Using technology adds to my teaching responsibilities more than it benefits my students.

  4. I think integrating technology can enhance student learning.

  5. Using technology enhances the effectiveness of my teaching.

  6. I do not want to use technology in my courses.

  7. My department recognizes the use of innovative instructional technology in promotion/tenure and/or merit pay decisions.

We created the survey on-line and asked the faculty to take the survey with the help of their mentor. In this way, we demonstrated a technique for on-line testing that many participants had not seen before. We provided support for those who were fearful by having the mentor assist. And we modeled behavior for using technology in a new and powerful way.

Once the pre- and post-surveys were taken, a difference score for each person was computed by subtracting the total pre-score from the total post-score. The totals were found by adding points based on the following scheme.

  1. I am currently not able or I strongly disagree.

  2. I am in the process of learning how or I disagree.

  3. I don't know.

  4. I am able but not efficient or I agree.

  5. I am able and efficiently do so as I see fit or I strongly agree.

If an item was worded in the negative (numbers 14, 16, and 19, i.e., the first, third, and sixth attitude items), the scoring was reversed. This resulted in 18 difference scores [3]: 3, 7, 9, 9, 10, 11, 13, 14, 15, 15, 19, 21, 22, 24, 26, 31, 32, 36, a set significantly different from 0. The higher the score, the greater the change from pre- to post-survey in the positive direction. The highest possible difference score is 80, which would mean that the person went from completely not able and strongly disagreeing to completely able and efficient and strongly agreeing. This is not likely, since some of our items measured whether the person was interested in technology at all, and we assumed some interest; otherwise, why would the person have applied to the program? The mean difference score is 17.61 with a standard deviation of 9.37. These results indicate that the program produced more positive attitudes and an increased sense of technology skills.
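To make the scoring concrete, the minimal sketch below (in Python, written for this article rather than taken from our analysis tools) shows how a total score with reverse scoring of the negatively worded items can be computed, how a difference score follows from it, and how the 18 reported difference scores can be summarized. The article does not name the significance test used, so the one-sample t-test against zero shown here is an assumption for illustration, and the sample pre/post responses are hypothetical.

```python
# Illustrative sketch only; the item numbering, the sample responses, and the
# one-sample t-test are assumptions, not the authors' actual analysis.
import math

NEGATIVE_ITEMS = {14, 16, 19}  # negatively worded items are reverse scored


def total_score(responses):
    """responses: dict mapping item number (1-20) to a 1-5 rating."""
    total = 0
    for item, rating in responses.items():
        total += (6 - rating) if item in NEGATIVE_ITEMS else rating
    return total


def difference_score(pre, post):
    """Positive values mean the post-survey total exceeded the pre-survey total."""
    return total_score(post) - total_score(pre)


# Hypothetical participant: all 2s before the program, all 4s afterward.
pre = {i: 2 for i in range(1, 21)}
post = {i: 4 for i in range(1, 21)}
print(difference_score(pre, post))  # 28

# The 18 difference scores reported above.
scores = [3, 7, 9, 9, 10, 11, 13, 14, 15, 15, 19, 21, 22, 24, 26, 31, 32, 36]
n = len(scores)
mean = sum(scores) / n                                           # 17.61
sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / (n - 1))   # 9.37 (sample SD)
t = mean / (sd / math.sqrt(n))                                   # roughly 8, df = 17
print(round(mean, 2), round(sd, 2), round(t, 2))
```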

Did the faculty change in their practices?

This was measured in two ways: survey and interview. Both were done at the end of the year. The interview items follow.

  • Describe your technophyte involvement.

  • How has the program been successful?

  • How has it not?

  • Do you see an effect on your teaching?

  • Has your own attitude about using technology in teaching changed?

  • Has your skill level changed?

  • Have your practices changed?

  • Do you see an effect on students?

The interview data offered us some qualitative analysis, at least in terms of comments. The interview data were very consistent with the other sources of data. They also gave us a strong sense that faculty were determined to change their practices. The technology enhancements were, in the minds of faculty, permanent changes to their teaching. When directly asked if this was so, faculty responded with definite “yes” answers. In addition, faculty were quick to describe further enhancements that they wanted to implement in the future. We came away from the interviews with no doubt that practices had been changed.

Our survey questions follow.

  • Please identify what technology enhancements you actually made to your course(s).

  • Please identify any positive outcomes that you believe occurred or have evidence of having occurred due to the technology enhancement.

  • Please identify any negative outcomes that you believe occurred or have evidence of having occurred due to the technology enhancement.

These items were placed into a larger survey that was done at the system level, instead of the campus level. Results from this strongly showed a faculty who felt they had made long-term changes. No one felt that they were unsuccessful at making permanent changes, although some identified a lack of time as causing them not to make as many changes as they had desired. Faculty unanimously named the mentor factor as the number one reason they were able to make these changes.

Did the students benefit?

We measured this with a survey given along with the regular course evaluation forms during the last week of the semester. Faculty in our program were asked to give these to their students. See Table 2 for a listing of the Likert-scale items and some results. Each question includes results from more than one class.

Item | N | Mean | S.D.
Overall, the PowerPoint presentations enhanced my learning. | 7 / 28 | 1 / 1.7 | 1 / .72
I was able to take notes from the PowerPoint presentations. | 7 / 28 | .86 / .57 | 1.07 / 1
I feel that the PowerPoint presentations were more effective than traditional lectures with overhead slides would have been. | 7 / 28 | 1.14 / 1.25 | 1.07 / .9
The PowerPoint slides were easy to see. | 7 / 28 | 1.57 / 1.54 | .53 / .51
I liked having a course Web site for this class. | 7 / 28 / 15 / 26 / 16 / 32 | 1.43 / 1.64 / 1.56 / 1.54 / 1.63 / 1.47 | .30 / .62 / .51 / .65 / .81 / .76
I accessed the course Web site for this class whenever I wanted information. | 7 / 15 / 26 / 16 / 32 | 1.29 / 1.67 / 1.27 / 1.56 / 1.63 | 1.11 / .62 / .87 / .51 / .55
The course Web site was easy to navigate. | 7 / 15 / 26 / 16 / 32 | 1.57 / 1.6 / 1.38 / 1.5 / 1.69 | .53 / .51 / .7 / .52 / .47
The course Web site contained valuable information. | 7 / 15 / 26 / 16 / 32 | 1.43 / 1.53 / 1.35 / 1.5 / 1.53 | .53 / .52 / .69 / .52 / .62
I liked having handouts available on the course Web site. | 7 / 28 / 16 / 32 | 1.71 / 1.79 / .81 / 1.38 | .49 / .42 / .91 / .66
I would prefer to have a course that is less dependent on my having to access the Web. | 7 / 15 / 26 / 16 / 32 | -.14 / -.93 / -.54 / -.31 / .09 | 1.46 / .73 / 1.14 / 1.01 / 1.28
Overall, the course Web site contributed to my learning. | 7 / 15 / 26 / 16 / 32 | 1 / .8 / .31 / .63 / .81 | .58 / .68 / .84 / 1.02 / .69
I was comfortable with the level of technology required in this course. | 7 / 27 / 15 / 26 / 16 / 32 | 1.43 / 1.36 / 1.47 / 1.38 / .94 / .84 | .53 / .49 / .52 / .7 / 1.06 / .95
The technology used effectively enhanced this course. | 7 / 27 / 15 / 26 / 16 / 32 | 1.43 / 1.04 / 1.2 / .31 / .88 / .69 | .53 / .64 / .68 / .84 / .81 / .86
The needed technological tools were available to me during the course. | 7 / 15 / 26 / 16 / 32 | 1.43 / 1.2 / .88 / 1 / .91 | .53 / .56 / .91 / .89 / .64
The technological aspects of this course were an important part of my learning. | 7 / 15 / 26 / 16 / 32 | .71 / .67 / -.04 / .75 / .09 | .95 / .72 / .96 / 1.06 / 1.15
The on-line discussion part of this course contributed to my learning. | 15 / 16 / 32 | 1.27 / .13 / -.09 | .46 / .96 / 1.03
I was comfortable having Web discussions as a part of this course. | 15 / 16 / 32 | 1.47 / .81 / .5 | .64 / .75 / 1.14
I shared more information through the on-line discussions than I would do in a class discussion. | 15 / 16 / 32 | 1.13 / -.06 / -.06 | 1.26 / 1.29 / 1.27
I think the PRS technology (hand-held devices used for review) has good possibilities for in-class discussion and learning. | 28 | 1.29 | .71
I would be comfortable using the PRS technology system for taking quizzes and exams. | 28 | .11 | .96
I liked turning assignments in electronically. | 16 / 32 | .06 / .59 | 1.24 / 1.07

Table 2: Student Results. Where an item was used in more than one class, per-class values are listed in the same order across the N, Mean, and S.D. columns.

Key: 2 = I strongly agree; 1 = I agree; 0 = I don't know; -1 = I disagree; -2 = I strongly disagree.

The student survey was designed to assess the impact of the program on students in classes taught by the technophytes. Technophytes were encouraged to pick and choose from the list of questions to match what they were doing in their classes. The student surveys were voluntary and not completed for all technophyte courses, so results were sparse. For the most part, however, students evaluated the technology enhancements positively.
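As an illustration of how the per-class means and standard deviations in Table 2 can be produced from the -2 to 2 coding in the key, the short sketch below (Python) computes the summary statistics for a single item in a single class. It is a minimal sketch: the response list is hypothetical, and the function names are ours, not part of the original study.

```python
# Minimal sketch; the response list below is hypothetical, not actual study data.
import math

CODES = {"strongly agree": 2, "agree": 1, "don't know": 0,
         "disagree": -1, "strongly disagree": -2}


def summarize(responses):
    """Return (N, mean, sample standard deviation) for one item in one class."""
    coded = [CODES[r] for r in responses]
    n = len(coded)
    mean = sum(coded) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in coded) / (n - 1))
    return n, round(mean, 2), round(sd, 2)


# Hypothetical class of 7 students answering one Likert item.
example = ["strongly agree", "strongly agree", "agree", "agree",
           "agree", "don't know", "strongly agree"]
print(summarize(example))  # (7, 1.29, 0.76)
```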

Conclusion

In response to a lack of faculty use of technology in teaching and learning, our university implemented Tech Camps and a technophyte program. Programs need to be evaluated. This paper has described our evaluation process of the technophyte program.

We saw four areas to evaluate: the implementation of the program, the change in faculty attitudes and perceived skill, the changes in faculty practices, and the benefit that the students received. We offer our evaluation instruments for each of these areas (although they are not mutually exclusive) for any other institution to use as desired.

We feel that our evaluation process was successful and, in addition, that the results reveal a successful program. Again, we offer this to the reader to suggest that one might adapt parts of our assessments for one's own use.

References

Berger, C. (1992). Ann Jackson and the four methods of integrating technology into teaching. Syllabus, 21, 2-4.

Cummings, L. E. (1996). Educational technology: A faculty resistance view. Part II: Challenges of resources, technology and traditional. Educational Technology Review, 5, 18-20, 30.

Danielson, J. A., & Burton, J. K. (1999). A support system for instructional technology in higher education: The housecalls program of Virginia Tech's College of Human Resources and Education. Educational Media and Technology Yearbook, 24, 51-56.

Ehrmann, S. C. (1991). Gauging the educational value of a college's investments in technology. EDUCOM Review, 26, 3-4.

Ehrmann, S. C. (1994). Making sense of technology: A dean's progress. Change, 26(2), 34-38.

Ehrmann, S. C. (1995). Asking the right questions: What does research tell us about technology and higher learning? Change, 27(2), 20-27.

Ehrmann, S. C. (1999). Asking the hard questions about technology use and education. Journal of Family and Consumer Sciences, 91(3), 31-35.

Fleit, L. H. (1994). Self-assessment for campus information technology services (CAUSE Professional Paper Services, #12, Pub. 3012). Boulder, CO: Cause.

Laurillard, D. (1993). Rethinking university teaching: A framework for the effective use of educational technology. London and New York: Routledge.

Lee, J. R., & Johnson, C. (1998). Helping higher education faculty clear instructional technology hurdles. Educational Technology Review, 10, 13-17.

Molenda, M., & Sullivan, M. (2000). Issues and trends in instructional technology. Educational Media and Technology Yearbook, 25, 3-13.

Roschelle, J. M., Pea, R. D., Hoadley, C. M., Gordin, D. N., & Means, B. M. (2000). Changing how and what children learn in school with computer-based technologies. The Future of Children, 10(2), 76-101.


Notes

[1] Some of our members only answered some of the questions. In addition, two members did not fill out the survey at all.

[2] Profiler is an on-line assessment tool designed to allow higher-education organizations to build a survey and then have their faculty take the survey on-line. Profiler can be found at http://www.profiler.com.

[3] Two faculty members did not complete the end survey. One of these had changed universities.
© University of Southern Queensland