
FACULTY DEVELOPMENT AND STUDENT SUCCESS EVALUATION

 

INTRODUCTION

The Institute of Applied Research and Policy Analysis (IAR) at California State University, San Bernardino will conduct a three-year evaluation of the Faculty Development and Student Success Project. We understand that this project is a collaboration among nine local community colleges (hereafter referred to as RCC and partners): Riverside Community College District, College of the Desert, Diablo Valley College, Orange Coast College, Pasadena City College, Rio Hondo College, San Diego Community College District, Santa Barbara City College, and Santa Monica College. The goal of the project is to improve the quality of teaching of first-time adjunct faculty at each of these nine colleges by providing an on-line course with follow-up face-to-face workshops, which will train new adjunct faculty in teaching, state education code issues, and college policies. It is anticipated that this program will better prepare adjunct instructors for their teaching experience, which will, in turn, create a more positive and rewarding experience for their students. As specified in the grant application, the expected outcomes for this project include: improved first impressions, enhanced teaching, higher retention rates, and greater student success. An evaluation of the program is essential in order to provide RCC and partners with critical data related to the effectiveness of the project. Specifically, the evaluation has the following components:

 

1) A Summative Evaluation, which will assess the effectiveness of the project as it relates to:
         a.  communicating the information to adjunct faculty through on-line courses;
         b.  raising faculty's confidence related to teaching;
         c.  enhancing faculty's sense of connectedness to their college;
         d.  impacting the design and delivery of instruction; and
         e.  impacting student learning.

2)      A Formative Evaluation, comprising focus groups of participating faculty, which will draw on the results of the summative evaluation to provide RCC and partners with suggestions and recommendations for improving the course materials and the overall implementation and impact of the program.

 

It is further understood that IAR will submit annual interim reports to RCC and partners at the conclusion of years one and two, and a comprehensive final report at the end of year three, which will include:

1)      an executive summary of major findings and policy recommendations;

2)      a full data display;

3)      a detailed discussion of the methodology and statistical techniques employed; and

4)      a complete analysis and interpretation of the findings, both summative and formative, which will address the effectiveness of the project in the areas discussed above, together with suggestions and recommendations for enhancing and improving the program.

 

The following sections of this proposal will further detail our conceptualization of the project and address relevant methodological issues.

 

SUMMATIVE EVALUATION

Methodology

 

Cohort One   

In order to assess the overall impact and effectiveness of this program on both faculty and students as it relates to the outcomes described above (improved first impressions of the instructor, enhanced teaching, increased retention rates, and greater student success), IAR will conduct a pre/post assessment. Specifically, IAR will take a census of the classes (roughly 200 across all nine participating colleges) taught by newly hired adjunct faculty in the Spring 2001 semester. Within each of these classes, the instructor will be surveyed within the first three weeks of the semester to assess their self-confidence as it relates to teaching and their sense of connectedness to the college. All students in these classes will also be surveyed at this time, so as to capture the majority of students before any withdraw from the course. This survey will be designed to measure their initial impressions of, and satisfaction with, the instructor and the course.

            At the end of the Spring 2001 semester, students will again be surveyed to measure changes (if any) in their satisfaction with the course and the instructor. In addition, they will be asked whether their impression of the instructor changed over the course of the semester and how much they learned from the course. Finally, student failure and withdrawal rates for these selected courses (supplied by each participating college's Department of Institutional Research) will be analyzed.

            Throughout the on-line course, participating faculty will be asked a variety of questions designed to test their understanding and assimilation of the material. Further, following the completion of the on-line course (approximately November 2001), they will be asked to evaluate the degree to which they feel the course was helpful to them and how (if at all) it will change the way they teach the course in the future. Finally, they will be asked to submit a copy of their Spring 2001 syllabus for later analysis.

During the Fall 2001 semester, IAR will conduct a follow-up with these same instructors (teaching the same classes). For purposes of analysis, classes will be divided into two groups: an experimental group, consisting of classes taught by instructors who took the on-line course, and a control group, consisting of classes taught by instructors who did not. During the first three weeks of instruction, both faculty and students will be given the same surveys administered at the beginning of the Spring 2001 semester. The faculty survey will include additional questions for instructors who participated in the on-line course, asking them to provide examples of ways in which they formulated or revised their course content based on the on-line course, to evaluate the extent to which it influenced the design and delivery of their course, and to assess the impact the modified design and delivery had on student learning. In addition, instructors in the experimental group will be asked to submit their Fall 2001 syllabus to IAR, and a content analysis of both the Spring and Fall syllabi will be conducted to evaluate changes in course content from pre- to post-training.

            As in the Spring 2001 semester, students will be surveyed at the end of the Fall 2001 semester to measure changes (if any) in their satisfaction with the course and the instructor over time, and how much they learned from the course. These data will be compared with data gathered at the end of Spring 2001 and between the experimental and control groups. Data on student failure and withdrawal rates for Fall 2001 will also be submitted by all participating colleges, and comparisons will be made from Spring 2001 to Fall 2001 and between the experimental and control groups.

 

Cohorts Two through Six

            Cohorts Two through Six will consist of the adjunct faculty newly hired in each semester of the evaluation period, beginning in Fall 2001. Because, beginning with Cohort Two, newly hired adjunct faculty will be able to take the course before teaching their first semester at the college, there will be no student pre-tests for these cohorts. IAR will therefore modify the approach described above for Cohort One. Specifically, IAR will use a post-test-only design with a control group. Essentially, this design involves comparing Cohorts Two through Six with Cohort One, which serves as the pre-test baseline for all subsequent cohorts. In addition, for each of Cohorts Two through Six, IAR will compare the "post" results of those who have taken the on-line course with those who have not.

More specifically, during the first three weeks of instruction in their first semester of teaching, student surveys will be administered in all classes taught by newly hired adjunct faculty. Depending on the number of new hires, IAR will take either a random sample (stratified by academic discipline and participation in the on-line course) or a census. Instructors who did not participate in the on-line course will serve as the control group, and comparisons will be made between the experimental and control groups as to the effect of the on-line course on student first impressions and course satisfaction. These students will be surveyed again at the conclusion of the course, and IAR will compare these data with data gathered at the beginning of the course to determine changes over the semester. IAR will also analyze student withdrawal and failure rates for students in these classes.
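The stratified sampling step described above can be sketched as follows. This is purely an illustration: the class roster, field names, and sampling fraction are assumptions, not part of the proposal.

```python
import random
from collections import defaultdict

def stratified_sample(classes, frac, seed=42):
    """Sample a fraction of classes within each stratum, where a stratum
    is the pair (academic discipline, took the on-line course)."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for c in classes:
        strata[(c["discipline"], c["took_course"])].append(c)
    sample = []
    for group in strata.values():
        # keep at least one class per stratum so no stratum is dropped
        k = max(1, round(len(group) * frac))
        sample.extend(rng.sample(group, k))
    return sample

# Hypothetical roster of classes taught by newly hired adjunct faculty
classes = [{"id": i, "discipline": d, "took_course": t}
           for i, (d, t) in enumerate(
               [("English", True), ("English", False),
                ("Math", True), ("Math", True),
                ("Math", False), ("History", False)])]

picked = stratified_sample(classes, frac=0.5)
```

Sampling within strata in this way keeps the mix of disciplines and course participation in the sample proportional to the census, which is the point of stratifying.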

Although IAR will not be able to collect pre-test data from students, pre-test information will be collected from faculty in the experimental group for Cohorts Two through Six. Specifically, the faculty survey administered to Cohort One in Spring 2001 will be available on-line, and faculty taking the on-line course will be asked to answer these questions before they begin the course. This will give IAR pre-test information for the experimental group regarding confidence level and connectedness to the college, in addition to information such as number of years teaching, gender, age, and academic discipline. The post-test faculty survey used for Cohort One will also be administered to all faculty in Cohorts Two through Six. IAR will not be able to compare syllabi from pre- to post-training, as there will be no pre-training syllabi for instructors in Cohorts Two through Six. However, a content analysis of syllabi from instructors in the experimental group will be conducted to determine whether recommendations from the on-line course have been incorporated, and comparisons will be made with syllabi from faculty in the control group.

 

Questionnaire Construction and Related Issues of Validity and Reliability

            In close consultation with RCC and partners, IAR will develop customized survey instruments (for both students and faculty) to assess program effectiveness and student satisfaction. In addition, the relevant literature will be reviewed for items and measures applicable to this project. IAR will then pretest the questionnaires and revise them where warranted.

            Special attention will be paid to questionnaire reliability and validity. Reliability (that is, consistency of results given a constant, unchanging population) involves a trade-off between using an established questionnaire with proven reliability and using a tailor-made questionnaire that we believe will best meet the needs of RCC and partners. On balance, IAR recommends the use of a customized questionnaire for both the student and faculty surveys. This customized questionnaire will allow for comparisons over time and enable RCC and partners to address their current research objectives.

            Constructing the initial questionnaire in consultation with RCC and partners will help ensure a valid instrument for measuring and evaluating the Faculty Development and Student Success Program. The resulting questionnaire should yield valid and reliable data relevant to the research objectives of RCC and partners.

It is anticipated that the majority of the items will be closed-ended. If needed, one or two open-ended questions may also be included to elicit additional information not available via the closed items. Based on our previous experience with in-class surveys, the questionnaire should be designed to take no more than five minutes to complete; non-response rates (as well as instructor non-cooperation) increase dramatically for longer surveys, reducing the validity of the results. In addition, all closed-ended responses will be recorded on a scantron form to facilitate data entry and minimize data entry errors.

 

Student and Faculty Questionnaire Administration

At the beginning of each semester, IAR will mail a packet of student and faculty questionnaires, with instructions for administration, to a designated person at each of the nine participating colleges. Within the first three weeks of each semester, all newly hired adjunct faculty, and all students in their classes, will be asked to complete a survey. All surveys will be administered during class time to ensure a high response rate. Instructors will collect the completed student surveys and place them, along with their own completed survey, in an envelope, which they will seal to ensure confidentiality and return to the designated person at the college, who will in turn mail it to IAR. Surveys conducted at the end of the semester will follow the same procedure, except that only students will complete a survey; instructors will not be surveyed at that time.

 

Data Analysis And Presentation

            Data gathered from the student and faculty surveys will be edited, coded, and entered for analysis using SPSS (Statistical Package for the Social Sciences). Descriptive statistics (e.g., frequency distributions and means) will be presented as needed to summarize the survey results.
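While SPSS is the analysis package named above, the descriptive summaries it will produce are straightforward; purely as an illustration, the same statistics could be computed as follows (the ratings shown are hypothetical, not project data):

```python
import statistics
from collections import Counter

# Hypothetical 5-point satisfaction ratings from one class
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]

freq = Counter(ratings)          # frequency distribution, e.g. how many 4s
mean = statistics.mean(ratings)  # average satisfaction
stdev = statistics.stdev(ratings)  # sample standard deviation
```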

 

Faculty Data

            Cohort One: Data collected from the Spring 2001 faculty surveys will be analyzed and summarized in terms of their level of confidence in teaching and their sense of connectedness to the college. Data gathered from faculty at the conclusion of the on-line course will be analyzed to determine the extent to which faculty understood and assimilated the material from the on-line course and to measure the degree to which they felt the course was useful to them.

            Once data have been collected from faculty in the Fall 2001 semester, all faculty data will be divided into experimental and control groups, and a paired difference test will be performed to analyze the effect of the on-line course on these faculty's confidence in teaching, attitudes toward the college, and the design and delivery of their courses between Spring 2001 and Fall 2001. This analysis will draw on questions in the Fall 2001 survey in which faculty describe how they integrated the material from the on-line course into their course curriculum, and on the content analysis of their Spring and Fall course syllabi.
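The paired difference test described above compares each instructor's pre- and post-course scores against themselves. A minimal sketch of that computation, using hypothetical confidence scores rather than project data, is:

```python
import math
import statistics

# Hypothetical confidence scores for the same five instructors,
# before (Spring 2001) and after (Fall 2001) the on-line course
pre  = [3.0, 3.2, 2.8, 3.5, 3.1]
post = [3.4, 3.6, 3.1, 3.7, 3.3]

diffs = [b - a for a, b in zip(pre, post)]   # within-instructor change
mean_d = statistics.mean(diffs)              # mean paired difference
se_d = statistics.stdev(diffs) / math.sqrt(len(diffs))
t_stat = mean_d / se_d                       # compare to t with n-1 df
df = len(diffs) - 1
```

The resulting t statistic is referred to a t distribution with n-1 degrees of freedom; pairing each instructor with themselves removes between-instructor variation from the comparison.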

            It should be noted that other factors, such as instructors modifying the way they teach simply through experience gained over time, may account for some of the difference from Spring 2001 to Fall 2001. This maturation effect is a function of time rather than a response to a specific event (such as the on-line course), and it can compromise the internal validity of the study. Maturation, rather than the on-line course, may in fact be the major independent variable, that is, the variable that produced the observed change over time. The experimental/control group design will allow us to evaluate the impact of maturation versus the on-line course on the observed changes.

            Because assignment to the experimental and control groups is based on self-selection (faculty choosing whether to take the on-line course) rather than random assignment, the characteristics of faculty in each group will be compared on variables such as number of years of teaching, gender, age, and academic discipline. This will help determine whether the self-selection is random or systematic. If it is systematic, it introduces bias and the two groups are not comparable: observed differences may be due not to the on-line course but to inherent differences between members of the experimental and control groups. In that case, the major independent variable(s) may be instructor characteristics (such as years of teaching, motivation, or gender) rather than actual exposure to the on-line course.

            Cohorts Two through Six: As with Cohort One, all faculty pre-test data will be analyzed and summarized. Data gathered from faculty at the conclusion of the on-line course will be analyzed to determine the extent to which faculty understood and assimilated the material from the on-line course and to measure the degree to which they felt the course was useful to them.

            Once data have been collected from faculty during their first semester of teaching, all faculty data will be divided into experimental and control groups, and a paired difference test will be performed to analyze the effect of the on-line course on these faculty's confidence in teaching, attitudes toward the college, and the design and delivery of their courses. In addition, a content analysis of each instructor's course syllabus will be conducted, and comparisons made between syllabi from faculty in the experimental and control groups.

 

Student Data 

Cohort One: Student data gathered at the beginning of the Spring 2001 semester will be summarized in terms of their overall impression of the instructor and their initial level of satisfaction with the course. Data gathered at the conclusion of the Spring semester will also be summarized, and comparisons will be made to determine if overall impression and satisfaction with the instructor changed over the course of the semester.

Fall 2001 student data will be broken down into two groups: classes in which the instructor participated in the program (the experimental group) and classes in which the instructor did not (the control group). Levels of student satisfaction will be compared between these groups, using a difference-of-means test for independent samples, to determine differences in student impressions of instructors and course satisfaction at both the beginning and the end of the semester. In addition, these data will be compared with data collected in the Spring 2001 semester (at both the beginning and the end of the semester) to determine the impact of the on-line course on student impressions of instructors and overall course satisfaction. Finally, IAR will examine student withdrawal and failure rates to determine whether courses taught by instructors who participated in the program experienced a decrease in these rates from Spring 2001 to Fall 2001.

Cohorts Two through Six: Student data gathered at the beginning of each semester will be summarized in terms of their overall impression of the instructor and their initial level of satisfaction with the course. Data gathered at the conclusion of the semester will also be summarized, and comparisons will be made to determine if overall impression and satisfaction with the instructor changed over the course of the semester.

Student data will then be broken down into experimental and control groups. Levels of student satisfaction will be compared between these groups, using a difference-of-means test for independent samples, to determine differences in student impressions of instructors and course satisfaction at both the beginning and the end of the semester. Finally, IAR will examine student withdrawal and failure rates to determine whether courses taught by instructors who participated in the program experience lower rates than courses taught by instructors who did not.
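The withdrawal-rate comparison between experimental and control classes amounts to comparing two proportions; one standard way to summarize it is a two-proportion z-test, sketched below with hypothetical counts (the numbers are assumptions for illustration, not project data):

```python
import math

# Hypothetical withdrawal counts per group
withdraw_exp, n_exp = 30, 200   # classes whose instructors took the course
withdraw_ctl, n_ctl = 50, 200   # classes whose instructors did not

p_exp = withdraw_exp / n_exp    # withdrawal rate, experimental group
p_ctl = withdraw_ctl / n_ctl    # withdrawal rate, control group

# Pooled proportion under the null hypothesis of equal rates
p_pool = (withdraw_exp + withdraw_ctl) / (n_exp + n_ctl)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_exp + 1 / n_ctl))
z = (p_exp - p_ctl) / se        # negative z means a lower rate with the course
```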

 

FORMATIVE EVALUATION - FOCUS GROUPS

In order to enhance the findings of the faculty and student surveys (the summative evaluation), IAR will conduct three focus groups with adjunct faculty in the first year and two groups in each of years two and three. These discussions will focus on the results of the summative evaluation in order to identify areas for program improvement, leading to modifications and improved project performance. In our experience, focus groups of this type greatly enrich the data by providing the opportunity to delve more deeply into issues that remain unclear from the survey results alone. The following section outlines the scope of work for the proposed focus groups.

 

Methodology

            IAR will conduct three focus groups comprised of adjunct faculty hired in the Spring 2001 semester who participated in the on-line course and face-to-face workshops. Each focus group will consist of 10-12 members and last approximately one hour.  The Co-Directors of IAR will facilitate the group discussions.

            A semi-structured interview guide will be developed for each of the focus group discussions. As with the questionnaires for the student and faculty portion of the study, IAR will develop the interview guide in close consultation with RCC and partners. Modifications and revisions will be made as deemed necessary in order to elicit information consistent with the research goals and objectives of RCC and partners. Focus group discussions will be scheduled according to the most appropriate and convenient time for the participants, most likely on the weekend or in the evening.

            IAR typically prefers to conduct focus groups in a neutral setting (such as CSUSB) so as to enhance the validity of the findings. However, given the wide geographic dispersion of the participating colleges, alternative settings can, of course, be arranged.

           

Data Analysis And Presentation

     Data gathered from the focus groups will be transcribed, edited, coded, and analyzed. This analysis will be integrated with the student and faculty survey results in order to place those findings in interpretive context and to enhance and validate them. A written report containing analysis, conclusions, and recommendations for program improvement will be submitted to RCC and partners in years one and two as part of the interim reports, and in year three as part of the final deliverables for this project.