The Role of Student Opinion Data in Assisting Students in Making Course Selection Decisions


Dr. Darwin Hendel, chair
Professor Avram Bar-Cohen
Corey Donovan
Professor P.W. Fox
Jason Frick
Professor Thomas Johnson
Tina Rovick



Report of the Subcommittee Appointed by
the Senate Committee on Educational Policy
Staff support provided by the Office of Planning and Analysis

May 1, 1997

University of Minnesota


This subcommittee report summarizes information about the possible use of student opinion data, collected through student evaluations of teaching, to assist students in making course selection decisions. The subcommittee based its discussion of student evaluation of teaching data on information provided by the Office of Measurement Services.

Most colleges and universities in the United States have a process whereby students evaluate the quality of instruction received. Such processes serve three purposes: a) to assist faculty in improving instruction (e.g., feedback from students indicating that the instructor speaks too quietly); b) to serve as one type of evidence, along with other documentation about performance in teaching, research, and service, used by faculty committees and by departmental, collegiate, or campus administrators in making personnel decisions ranging from whether or not to hire a new faculty member to promotion and annual salary decisions; and c) to help students and advisors make informed course selection decisions. These three differing purposes demand that the content of the specific questions answered by students fit the particular purpose for which the results are to be used.

The University Senate has had a policy on the evaluation of teaching dating back to 1973, although only the most recent policy, adopted in 1992, is pertinent here. That policy was developed within the faculty governance structure of the University of Minnesota and demonstrates the commitment of University faculty to the evaluation of teaching. It requires that student surveys be included as one source of information about a faculty member's teaching, and that written student evaluations be obtained for at least one section of each of the courses taught during the previous year.

In brief, we have concluded that a two-fold concern exists, but that release of all of the results of the five questions required by the University Senate for student evaluation of teaching is unlikely to yield positive outcomes for either aspect of the concern. Instead, the subcommittee has concluded that there is a need to provide students with student opinion data for a more appropriate set of questions, and that the University should take additional steps to help students and their advisers make more informed course selection decisions. The detailed recommendations of the subcommittee are summarized in the section "Recommendations and Timetable for Implementation."

This report summarizes relevant materials and statistics gathered by the subcommittee and discusses the relative advantages and disadvantages of using the student evaluation of teaching data in course selection. Our conclusions relative to the three primary questions are as follows:


Usage of Student Evaluation of Teaching

Statistics provided by the Office of Measurement Services point to widespread usage and central processing of student evaluation of teaching forms. Five different forms are available for use:

The statistics in Table 1 below describe usage across all forms for 1993-94, 1994-95, and 1995-96. The most frequently used form is Form D, which accounts for almost 50 percent of the courses evaluated.

Table 1
Usage of Student Evaluation of Teaching Service

                                1993-94    1994-95    1995-96
Questionnaires Processed        169,027    213,930    221,856
Distinct Summary Reports a        7,100      9,605      9,891
Courses b                         3,692      4,341      4,259
Departmental Units Reported         114        118        117

a Refers to the number of distinct summary reports generated (e.g., several distinct reports for multiple section courses).

b Number of identifiable courses that were evaluated; includes some duplication if more than one form was used in a particular course.



Senate Policy on the Evaluation of Teaching

Based on the recommendations of the Senate Committee on Educational Policy (SCEP) on May 14, 1992, the University Senate adopted a "Policy on Evaluation of Teaching Contributions" that included the following three major statements:

The intent of the policy was to develop a better means of evaluating teaching for the purpose of personnel decisions; it did not directly address the other purposes that might be served by the policy (e.g., assisting instructors in making course improvements).

On April 1, 1993 SCEP brought to the Senate for approval "Protocols for Student Evaluation and Peer Review of Faculty Teaching Contribution." The following numbered items were pertinent to the discussions within the subcommittee:

  1. Directions given on student evaluation questionnaires will include the following statement: "Your responses to this questionnaire are important because they will be used in tenure, promotion and salary decisions for your instructor. Your thoughtful written comments are especially requested, and may help your instructor improve future course offerings. The results of this evaluation (including the evaluation forms) will not be returned to the instructor until after the final grades are submitted for this course."

  2. The questionnaire will include the questions approved by the Senate (14 May 1992), plus space for additional items generated by the instructor or the unit. The form will ask for information on the student's major, GPA and class year, as well as whether or not the course is in the student's major and whether the course is required or elective for the student. There will also be a request, marked optional, for information on the student's age, gender, and race or ethnicity.

  3. Administering student evaluations will be the responsibility of each instructional unit. Student evaluations used in promotion and salary decisions will be administered at the beginning of a class period, during the last two weeks of instruction for the quarter. The evaluations will be handed out, completed, and collected without the instructor being present. It is suggested that a student be asked to hand out and collect the forms. Once collected, evaluations will be put in a sealed envelope or box and brought to the unit office, to be logged in and sent to the appropriate data processing center.

  4. Each campus will determine the appropriate manner of administering and evaluating student evaluation forms. To facilitate tabulation of the results of standardized questions on the student evaluation forms, each campus administration will provide the instructor and the unit chair with a summary of the data; the original questionnaires will be returned to the instructor. This summary will include appropriate statistical characterization of the responses to each question and, where a statistically meaningful data base exists, comparison to the responses for the same question on a campus, college, department, and program basis. To make comparative analysis more meaningful, there will also be comparisons on the basis of class type (e.g., large lecture, small discussion, laboratory, upper or lower division, elective, needed to meet university or major requirements). As resources permit, other types of statistical processing and comparisons may be added at the request of faculty or instructional units.

  5. To ensure that student teaching evaluation results are used with appropriate caution, tenure-home units shall be provided with a brochure summarizing current research on the meaning and usefulness of student evaluations (including questions of reliability and validity).

Of special interest is the last statement concerning the need for periodic reviews of the policy adopted by the Senate in May of 1992. Although the subcommittee was not charged with an overall review, much of the information in this report is pertinent to such a review.

Two years after this policy goes into effect, and periodically thereafter, both the overall implementation of the policy, and the value of its constituent elements (e.g., the standardized student evaluation mandated by the Senate in May of 1992) will be reviewed by SCEP, so as to bring to the attention of the Senate any changes that may seem needed.

Overview of Discussion

The two dimensions of the concern addressed by the subcommittee are: a) poor teaching in isolated situations does occur at the University of Minnesota; and b) students have a legitimate and unmet need to have additional information about the instructors of courses to enhance the quality of their course selection decisions.

Faculty representatives on the subcommittee were unanimous in their view that ensuring good teaching is an important administrative responsibility, and that the responsible parties need to be held accountable to ensure that good teaching does, in fact, occur in each and every class at the University of Minnesota. It was not within the purview of the subcommittee to determine the extent to which such accountability is a current reality at the University of Minnesota, although concerns were expressed, especially by faculty representatives on the subcommittee.

Student representatives on the subcommittee noted examples in which the quality of the instruction was not adequate, and endorsed the need for administrative accountability for the quality of classroom instruction. At the same time, students indicated that their primary reason for believing student evaluation of teaching data should be accessible to them is to assist them in making course selection decisions, rather than to force change by making student evaluation of teaching data public. Some apprehension was expressed by faculty that making evaluations of teaching public might decrease the likelihood that student opinion data would be collected, which, in turn, would decrease the likelihood that those responsible for ensuring that good teaching occurs would be able to identify situations in which poor teaching occurs.

When students are asked what information about instructors would be useful to them in selecting courses, it is not apparent that statistical summaries of previous students' evaluations based on the five required questions would address their information needs. A March 12, 1997 Minnesota Daily editorial, "Faculty evaluations should be public," by Greg Lauer begins as follows: "Hopefully, I'll avoid the physics professor who speaks in tongues (he calls it quantum mechanics), the psychology instructor obsessed with Freudian development and the French prof who treats students like they are Americans in France. . .I know that who is teaching the class is just as important as what is being taught." It is quite improbable that the statistical summary for the five questions required by the University Senate would be useful to Mr. Lauer in answering his legitimate questions about whom he will encounter next quarter. With the increasing use of e-mail at the University, it is easy for Mr. Lauer to ask the three professors the appropriate questions, the answers to which should help him avoid such instructors.

Considerable discussion in the subcommittee focused on the responsibility of the University of Minnesota to ensure that high quality teaching is available in all courses. Both faculty and student subcommittee members agreed with the assertion that administrative responsibility and accountability are central to identifying problems in instructional quality and to assisting those faculty whose course evaluations fall below an acceptable level. There were differences of opinion, however, about the role of student access to instructor course evaluations in stimulating further improvements in the quality of teaching. Although such an outcome may occur, the stated purpose of giving students access to student evaluation of teaching data is to assist them in making course selection decisions, rather than to force improvement by making public the evaluations of instructors who receive very low ratings.

Although students provide the responses that generate instructor evaluation summaries, students are not generally aware of the results across types of courses and instructors, and have not had access to the results for particular courses. Although not an intended result of making course evaluation summaries accessible to students, students might become more discriminating in their responses based on their knowledge of the overall distributions of mean scores for the five required questions for faculty in general.

Context

In response to a request from students at the University of Minnesota-Twin Cities campus about the "possibility of releasing teaching evaluations to students," the Senate Committee on Educational Policy (SCEP) appointed a subcommittee to address the following questions:

On March 14, 1997 the University of Minnesota Student Senate Consultative Committee passed a resolution concerning the addition of student evaluations of faculty to the agenda of the Student Legislative Coalition. The text of the resolution, which endorses the process established by SCEP, follows:

Whereas the ability of students to access summary data of quarterly student evaluations would assist in the course registration process, would provide a great service to students and would hold faculty more accountable to students;

Whereas the student senate approved a resolution urging the faculty, administration, and the regents to provide a means for this student benefit without including the legislature;

Whereas the merits of available evaluations could benefit students, without allowing excessive consumerism that would negatively affect our University community morale, by working through the University Senate and circumventing further fracturing of an already fractious relationship with faculty;

Whereas the Senate Committee on Educational Policy is currently reviewing the most effective and beneficial process for implementing a policy making available, for student review, student evaluations of faculty, and has a subcommittee currently making progress towards this goal,

Whereas the proposed change in law affects all the students of the University of Minnesota as well as the students of MNSCU;

Whereas there has been no formal consultation of the Student Senate Consultative Committee and little consultation on the proposed changes to the data practices act outside of the Twin Cities undergraduate student association;

Whereas the Student Senate Consultative Committee represents the students of the University at large and not the individual campuses, institutes, colleges, schools, or departments of the University;

Whereas the Legislative Agenda is subject to the approval by the Student Senate Consultative Committee, as stated in Article I Section 4 of the Student Legislative Coalition's Constitution;

Therefore be it resolved that the Student Senate Consultative Committee supports the work being done in the University Senate and specifically what is being done in the subcommittee of Senate Educational Policy Committee, and

Be it further resolved that the Student Senate Consultative Committee advises the Student Legislative Coalition not to add faculty evaluations to its legislative agenda at this time.


Summary of Student Opinion of the Quality of Teaching

To ascertain the overall level of student evaluation of the quality of teaching, the subcommittee was provided descriptive information by the Office of Measurement Services on the course means for the five required questions for 1993-94, 1994-95, and 1995-96. As the results in Figure 1 indicate, the means for all five items were considerably above the "satisfactory" level (4.0 on the seven-point scale). The highest mean is for the item "Instructor's knowledge of subject matter," which at 6.25 is remarkably high given an upper limit of 7.00 (the value that would result if all students rated all instructors as exceptional in all courses). The means for "Instructor's respect and concern for students" are only slightly lower (5.91-5.95). The mean for "Instructor's overall teaching ability" is approximately mid-way between 5.00 and 6.00. The lowest course-related mean is for the item "Amount of learning in class," which is only partly related to what an instructor does in a course. Although the three distributions in Figure 1 look similar, the consistency in means for the five items is even more apparent when the three years of data are presented separately for each of the five items, as shown in Figure 2. These overall statistics are consistent with other statistics relevant to students' selection of courses (e.g., 3.8 percent of changes in course registrations occur as a result of instructional quality concerns).

Figure 1

Mean Student Evaluation of Teaching Class Averages for
Five Required Questions for Twin Cities Campus, by Year a

                                                1993-94   1994-95   1995-96
Instructor's Overall Teaching Ability             5.58      5.61      5.62
Instructor's Knowledge of Subject Matter          6.25      6.26      6.25
Instructor's Respect and Concern for Students     5.91      5.94      5.95
Amount of Learning in Class                       5.27      5.28      5.28
Physical Environment in Classroom                 4.95      4.93      4.94





Figure 2

Mean Student Evaluation of Teaching Class Averages for
Five Required Questions for Twin Cities Campus, by Year a





Additional information about the distribution of the class means suggests some variability in the course means, as illustrated in Figure 3 below, which portrays the percentages of mean scores in seven score ranges for the question "How would you rate the instructor's overall teaching ability?"

Concerns were expressed by subcommittee members about those courses/instructors with means below the level of "satisfactory" (4.00 on the seven-point scale), which account for less than 10 percent of the ratings.

An even more detailed picture of the means for the overall teaching item is presented in Figure 4. Two points are worth noting. First, the number of courses in 1995-96 with mean ratings of "very poor" numbered 44, which represents one percent of the courses/instructors in the data set. Second, the frequency distribution indicates the very large number of courses with means clustered around the scale points (e.g., 6.00) and mid-points (e.g., 5.50). This finding suggests that the differences among courses/instructors are negligible at certain scale points, and, therefore, the likelihood of the data being useful to students making decisions among those instructors is non-existent. However, if the choice involves one instructor at one cluster (e.g., 6.0) versus another instructor at a different cluster (e.g., 5.0), a meaningful difference is likely to exist.

Figure 3

Distribution of Mean Scores on the Overall Teaching Ability
of University of Minnesota Instructors, 1995-96







Figure 4






Options Considered

The subcommittee identified several options to respond to the request that student evaluation of teaching results be made available for students to assist them in course selection.

Instructors and Courses to be Included

In addressing the first question, the subcommittee sought guidance from the Office of the General Counsel concerning privacy issues surrounding the student evaluation of teaching data currently being collected and summarized in accord with the Senate Policy on the Evaluation of Teaching. As a public institution, the University of Minnesota is obligated to make all data available unless the data are considered private, either from the perspective of faculty as employees of the University of Minnesota or from the perspective of students concerning their own educational records and performance.

The Minnesota Government Data Practices Act indicates that only certain data about a faculty member are considered "public" data (for example, name, salary, position, educational preparation, and discipline), all of which are matters of fact rather than evaluation or interpretation. All other "data" about faculty not specifically identified by law as "public" are considered private and cannot be made public without the consent of the individual employee. Unless student evaluation of teaching data were added to the list of public data by the Minnesota Legislature, the University of Minnesota would be in violation of Minnesota's privacy law if faculty were required to make student evaluation data available to students to assist in course selection. In a few other states, student evaluation of teaching data are considered to be in the public domain.

Individual faculty members could sign a written consent form permitting release of student evaluation data to assist students in course selection, which has been done in those parts of the University of Minnesota (e.g., Carlson School of Management) that already give faculty the opportunity to release student evaluation of teaching data to assist students in the course selection process. The subcommittee considered four options and concluded that option 4 was the preferred option.

Option 1: Required Release of Results for all Faculty and Courses

To require that student evaluation of teaching data be made public could be compared to the following: requiring that all preK-12 teachers make available the evaluations of their teaching by the parents of children in their classes to assist in teacher selection; requiring that all social service agencies make available to the general public the client satisfaction results from surveys administered within those settings to enable clients to select agencies; requiring that elected officials make public the results of constituency satisfaction surveys to assist voters in the voting process; and requiring that all driver's license examiners make available to test takers the comments of examinees tested by the particular examiner to assist would-be drivers in choosing an examiner.

This option would be considered a violation of current Minnesota data privacy law. Moreover, even if it were legal, the required release would not be desirable for the following reasons:

Option 2: Voluntary Release of Required Questions for any Interested Faculty Members

In this option, all faculty would be invited to participate and those faculty who are interested in participating would sign a release form authorizing the Office of Measurement Services to provide to a designated office a copy of the statistical results of the five required questions for the course being evaluated. The faculty member would specify where the results would be accessible to students: in their office; in a departmental office; in a collegiate office (e.g., advising office); or some other campus-wide office.

Option 3: Voluntary Release of Required Questions for Subset of Faculty and Courses

In this option, faculty would be invited to release student evaluation of teaching data for only certain types of courses (e.g., courses that fulfill campus liberal education requirements) for which there is a possibility of student choice. If a broader set of results was deemed desirable, no restrictions on types of courses would be imposed.

In the discussions in the subcommittee and in previous discussions within the Senate Committee on Educational Policy, several concerns were expressed about the possible negative effects of widespread release of results of the five required questions for certain types of instructors (e.g., teaching assistants who are unlikely to teach the same course more than once and assistant professors who are new to teaching and who may not yet have had a chance to refine their teaching skills). In this option, only associate and full professors would be invited to participate if they wished to do so. Moreover, there are numerous required courses that are taught by only one faculty member and for which there is no possibility of the data enhancing the student choice process.

Option 4: Voluntary Release of Public Release Items

One of the more perplexing issues addressed by the subcommittee involved the differing purposes to be served by student evaluation of teaching, and the need to think about the implications of differing purposes relative to the content of questions asked of students. The only limitation in terms of participation would be the exclusion of teaching assistants, who typically do not teach the same course repeatedly and who are actively involved in developing their expertise as teachers. The section below "Content of Questions to be Asked of Previously Enrolled Students" suggests a strategy for developing a more appropriate set of questions.

Content of Questions to be Asked of Previously Enrolled Students

It became apparent in discussions in the subcommittee, as well as in previous comments in SCEP, that there are concerns about whether or not the five required questions, or the longer list of instructional improvement questions, are the "best questions," that is, whether the responses of previous students to those questions would in fact address the legitimate questions that students may have about instructors they do not know. Those institutions that have a system in place have tended to use the standard items found on most student evaluation of teaching instruments used for personnel decisions and instructional improvement purposes.

Anecdotal evidence suggests that there is wide variety in what students would like to know about possible instructors: when the subcommittee chair asked five undergraduates, each gave a quite different answer. One wanted to know if students thought the grading was fair. Another wanted to know if other students would recommend the course. Another wanted to know if previous students thought the course objectives were achieved. Another wanted to know if students fell asleep in the class. And one student did not think the results would be useful, since students do not take the process seriously.

The great variation in the responses noted above suggests that a system of "five questions for all" does not directly address the information needs of students as they think about course selection. Instead, what might truly help the course selection process is for more students to have easy access to faculty to ask them directly about a particular aspect of the course. The Course Guide provides good information, but does not include the detailed information considered useful by students interested in a particular course. An "Ask your Professor" option could easily link students with the faculty teaching a particular course, and could be implemented electronically.

In an attempt to get a broader perspective on the issue, a convenience sample presented itself in mid-March: a list of names and ID numbers of students who signed a petition in support of making the results of evaluation of teaching available to students to assist in the course selection process. That list could be used in spring quarter 1997 to obtain students' perspectives on a new set of course opinion questions.

Given that the five required questions are used in making personnel decisions, those questions must allow fine distinctions among faculty, which necessitates a seven-point response scale. Such fine distinctions are unlikely to be necessary or useful in providing students with a reliable overview of the opinions of previously enrolled students.

One possible framework to use in developing a more appropriate set of questions to guide student course selection is to focus on those principles that have been demonstrated to contribute to student learning, and to translate those principles into questions that might suggest how faculty and students in a given course address those factors. The following seven principles have been adapted from Chickering and Gamson's "Seven Principles for Good Practice in Undergraduate Education" (AAHE Bulletin, March 1987, pp. 5-10).

Since it became apparent that there is little published literature on the content of student evaluation of teaching questions that are most likely to be useful to students, colleagues in instructional development and evaluation were contacted via the POD Network electronic mail list. Their suggestions for specific questions/types of questions are as follows:

The aforementioned resources were used to propose a possible set of items for inclusion in Form PR (Public Release); possible items are listed below.

Response options for each item: Yes / No / Uncertain

  • I would take another course with this instructor.
  • I attended almost all of the class sessions during the term.
  • Students who want a structured learning environment should take this instructor.
  • The course syllabus / course outline was a fairly accurate description of learning activities in the course.
  • Students who do best in less structured courses should take this instructor.
  • The instructor used a variety of teaching and learning strategies in the class.
  • The instructor provided me with timely and helpful feedback about my performance in the course.
  • The instructor set high expectations for student performance in the course.
  • Lectures and other in-class activities contributed most to my learning.
  • The learning objectives for the course were well achieved for me.
  • Students usually participated in classroom discussions.
  • I met with the instructor sometime during the term (during office hours or at other times) and/or communicated with the instructor by e-mail or telephone.


Options for Access and Distribution

If faculty are invited to participate in the voluntary release of student evaluation of teaching data, the next set of questions addresses the cost of and procedures for making those results accessible to students. Students in parts of the University already have access to student evaluation of teaching summaries (e.g., the Carlson School of Management, the Humphrey Institute, and selected courses in the Medical School), but it is relatively difficult and time consuming for students to make use of the results. Typically, a student must go to a particular office on campus and page through the detailed statistical summaries of results provided by previous groups of students.

Figure 5 below suggests that the options for access and distribution involve questions of both cost and usefulness of the information. The costs are likely to be minimized and the usefulness maximized if the student opinion results were made available to students electronically, embedded in the home pages of interested faculty members and connected to the information already available in The Course Guide. The review of practices at other institutions that make results available electronically provides models to guide implementation at the University of Minnesota.

Currently, there are four different standard forms that may be used, but all forms contain the five questions required by the University Senate. The forms differ in the nature of the additional questions and in the flexibility in the number of additional questions that may be included. The additional questions are those most likely to be useful to the instructor for instructional improvement purposes (e.g., questions about lectures, quizzes and associated reading materials). It is important to note that none of the items on any of the forms was developed explicitly because of its likely usefulness to students in making course selection decisions. The subcommittee recommends that a new form (Form PR: Public Release) be developed that includes those questions most likely to provide useful information to students. Although additional consultation is needed before the set of questions is finalized (including some formal feedback from a sample of students), the subcommittee has developed a draft version of the questions (Attachment A). The estimated cost is $900 to prepare and obtain 15,000 copies of the machine-readable Form PR. Those faculty who wish to make student opinion data available to students would sign a release form when the completed surveys are returned for processing by the Office of Measurement Services (Attachment B), and would receive the summary results prior to the public release.

Figure 5
Alternative Approaches for Using Student Opinion Data
to Assist Students in Course Selection




Recommendations and Timetable for Implementation

Phase I: Design of WWW Reporting Framework (Spring-Summer 1997)

Phase II: Communication and Initiation (Fall 1997)

Phase III: Gradual Implementation (Winter 1998 and beyond)


Literature on Student Access to Student Evaluation of Teaching

An article "Evaluation by Students: Concept Comes of Age" in the December 2, 1996 New York Times began "It used to be that what students had to say about professors was unprintable, it remained that way. No longer. The evaluations are used to improve teaching, to help students choose courses and to assist faculty and administrators in promotion and tenure decisions." The following quote from Wilbert McKeachie, associate director of the Center for Research and Post Secondary Teaching and Learning at the University of Michigan and past president of the American Psychological Association commented "People tend to think of teaching as just involving content. But teaching is about getting the knowledge to students' heads and it is hard to find anyone who is a better judge of whether they are learning something than the students themselves."

The article commented as follows about informal course guides:

"In addition to the institutionalized procedures for faculty evaluation, informal course guides are also published and sold by students at some universities. These are usually every bit as disrespectful as any beleaguered faculty member ever dreamed possible. One of the oldest of these is the "Confidential Guide to Courses," or the "Confi Guide" put out by the editors of The Harvard Crimson. "The first purpose is to be humorous; the second purpose, if it can't be humorous, is to be interesting, and last, we're trying to be informative," said Jonathan Moses, managing editor of The Crimson. We are not like the "CUE Guide," referring to the official university course guide.

At Stanford University, the official course evaluation guide is being computerized. Soon students will be able to use a terminal to find out all the courses taught on a certain day, the names of the courses taught by a certain professor, or what requirements a given course fulfills. At the end of each course description is a student evaluation section.

"At the end of each quarter, the registrar sends out evaluation forms with questions like, 'What are the best things about the class? How would you rate the course­heavy, light or moderate? What would you say to a student taking this course?'" said Alice Lee, a student who is working on the project.

A recent article "Academic Freedom, Tenure, and Student Evaluation of Faculty: Galloping Polls In the 21st Century" by Ralph E. Haskill, University of New England in the February 1997 issue of Education Policy Analysis Archives contains the following brief section on releasing SEF data to students and the public:

In exploring possible legal implications of SEF, it should be made clear that I am not an attorney and approach this section on the basis of the "reasonable man" legal standard. To begin, some faculty believe that due process and defamation issues are involved in SEF (Crumbley, 1996). It has been suggested that faculty are entitled to at least the same rights as students. The Fourteenth Amendment requires, for example, due process before a public institution may deprive one of life, liberty, or property. Given the problematic nature of SEF, due process is in question. In a university, a faculty's reputation is considered a liberty right, and for tenured faculty the courts have pronounced the possession of tenure a property right. Presumably, any inappropriate action depriving faculty of these rights would be open to legal action.

Though it is illegal to post a student's grades using a social security number or date of birth on the majority of campuses, scientifically questionable SEF and other anecdotal student remarks about faculty teaching are not only used in determining faculty salary increases, promotion and tenure decisions, they are openly published on some university campuses and sanctioned by some administrators and state government officials. In what many faculty see as an outrageous attempt to control the academic classroom, some state governments have sanctioned the release of SEF to the campus community and in some cases to the general public by publishing faculty student evaluations on the university's world wide web pages, thus making them not only available on campus but globally.

Case in point: At the University of Wisconsin, the Chancellor refused to release the SEF, citing a statute allowing personnel evaluations to be withheld from public view. The students took the chancellor to court. However, after being advised to do so by the state's Attorney General, citing Wisconsin's open-records law, the University of Wisconsin's campus will open students' evaluations of professors for public view. To the credit of the student and faculty senates, they passed resolutions in support of the Chancellor's refusal, and the university's lawyer concurred. Despite these resolutions, the Attorney General disagreed, writing that "the requested records are public records and the university's stated reasons for withholding access do not outweigh the public interest in the records" (Chronicle of Higher Education, 1994a, 1994b).

Other schools also published SEF. One recent survey of accounting departments found that 11.4% of the respondents indicated that SEF scores are made available to students (Crumbley and Fliedner, 1995). Indeed, a search using "faculty evaluation" on the world wide web will return numerous examples of published SEF. All this while faculty are restricted from divulging information on students (see Pennsylvania State University, 1996). Articles are, however, beginning to appear that question the legality of publicly releasing SEF (Robinson and Fink, 1996).

It has been suggested that if a university damages a faculty member's reputation by publishing false and anecdotal data from SEF, faculty should be able to sue for libel or defamation. The concept of defamation typically refers to communication that causes a person to be shamed, ridiculed, held in contempt by others, or their status lowered in the eyes of the community, or to lose employment status or earnings or otherwise suffer a damaged reputation. Legally, while defamation is governed by state law, it is limited by the First Amendment (Black, 1990). According to one source, however, the courts have generally protected administrators from defamation charges resulting from performance evaluations (Zirkel, 1996). It would seem, however, that these older precedents applied when administrative evaluations were conducted in private and not publicly distributed.

University administrators are often allowed to release SEF to students when the release of personnel information is apparently allowed in no other phase of personnel or other key management functions. An Idaho ruling upheld the release of SEF to students by reasoning that students were not the general public and therefore faculty evaluations were not protected under the privacy rights of the Idaho Code (Evaluating Teacher Evaluations, 1996). Given such apparent breaches of confidentiality and privacy, it will be instructive to see how the courts will continue to rule. It would seem that a university should be held responsible for insuring that data made public are valid.

Finally, in typical personnel evaluations, professional validation studies are not permissible unless shown by professionally acceptable methods to be "predictive of or significantly correlated with important elements of work behavior which comprise or are relevant to the job or jobs for which candidates are being evaluated." Under Title VII of the Civil Rights Act of 1964, the employer must meet "the burden of showing that any given requirement (or test) has a manifest relationship to the employment in question" (Griggs v. Duke Power Co., 401 U.S. 424 (1971)). "In view of the possibility inherent in subjective evaluations, supervisory rating techniques should be carefully developed, and the ratings should be closely examined for evidence of bias" (EEOC Guidelines, 29 CFR 1607.5(b)(4), in Crumbley, 1996).

Practices in Place at Other Institutions

The five questions required by the University Senate for the University of Minnesota are quite similar to student evaluation of teaching questions used at other institutions. An informal survey by the Minnesota Student Association (MSA) in the fall of 1996 of other Big Ten institutions (Indiana University, University of Wisconsin, University of Michigan and Penn State) concluded: "Note that all of these schools release the evaluations to the students. Some schools release all evaluations. Others do not." Further review of information collected by MSA provides the following additional descriptions of the systems in place at these four institutions. To get a broader picture of the use of student evaluation of teaching data to assist in course selection, a request for information was made to institutional research colleagues through the Association of American Universities Data Exchange (AAUDE), an organization of 42 research institutions. Responses have been received from 19 institutions as of April 1997 and are also summarized below. The descriptions are organized according to the type of system in place at the institution (e.g., whether the information is collected separately by student organizations or whether institutionally collected data are used).

No System

Iowa State University: Information on student evaluations of instructors and courses is not provided to students, although a short-lived process was in place in the 1970s. The Government of the Student Body has renewed interest in this possibility.
University of Missouri: No systematic process is in place.
Cornell University: It's all word of mouth.
University of Kansas: Results are not available to students, although student associations are currently pursuing the possibility.
University of Nebraska-Lincoln: Results of teaching evaluations are not available to students. Over the years, student associations have initiated processes to rate faculty and distribute results, but nothing has survived long enough to become institutionalized.



Student Association System based on Separate Data Collection

Indiana University: Sponsored by the Indiana University Student Association (IUSA)

Data from students enrolled fall 1995 semester
Separate IUSA request to instructors to survey classes
Access http://silver.indiana.edu/~iusafce
Twelve common questions: four on courses, eight on instructors
Indication of number enrolled, number surveyed
Four-point scale: 0=strongly disagree, 4=strongly agree
Organized by college/department/course/instructor
Means for items are graphed
Course score and instructor score
Offerings by same instructor repeated

University of Virginia: A paper-based system exists that is separate from data collected by the institution.
Penn State University: Separate surveying process by the Academic Assembly
Sponsored by the Student Book Store
Focused on general education courses
Combines course information and evaluations
Seven standard questions and student comments
Percent of respondents reporting positive evaluation
Includes percent of students who drop or fail
Indicated when students surveyed
Class sample noted and class size
Section results combined
Organization by department/course/instructor
University of North Carolina: Evaluation of teaching is not required, but most departments/schools do evaluate teaching. A separate process funded by student government occasionally produces and distributes The Carolina Course Review, the results of which can be accessed on the institution's home page.



Systems Based on Institutional Data, Voluntary Faculty Participation, but No Standard Questions

University of Toronto: Different faculties use different course evaluations. In the largest unit (the Faculty of Arts and Sciences) the effort is jointly sponsored by the Faculty and the Arts and Sciences Student Union. Summary data are published in the "Arts Calendar," and 10,000 copies are distributed.
University of Wisconsin-Madison: Sponsored by the Associated Students of Madison

To be used in course decisions for spring semester 1996
Information from two previous semesters
Questions used vary among departments
Focused on comparisons between professors teaching same course
Five-point scale, means reported
Organized by department/college/instructor
Each identical course offering by instructor listed separately
No questions in common to all evaluations
Unclear about separateness of data collection process
Paper only summaries provided to students

Ohio State University: Student government organizes the dissemination of the student evaluation data, which are included as an insert in the campus newspaper. Only professors who volunteer to have results published are included, but more and more volunteer. http://www.acs.ohio-state.edu/students/usq/ (choose the "Projects" option, then "Student Teaching Evaluations")
Sponsored by Undergraduate Student Government
Only results from Student Evaluation of Instruction used by more than 30 percent of classes
Permission to publish obtained from instructors
Evaluations from only Spring 1995
Organization by department/course/instructor
Includes number of enrollees and survey respondents
Includes: class standing, GPA breakdown, and enrollment reasons
Five-point scale: 1=strongly disagree, 5=strongly agree
Means rated for each question
Overall rating, numbers in each category
University of California-Irvine: Plans are being discussed to develop one standardized form.

Currently, they have two sources of teaching evaluations. One is the voluntary participation of faculty members with Instructional Resources Services, a central office. This questionnaire consists of 26 yes/no items rated on scantron answer sheets and four short answer questions. Results are returned to the faculty member and are not used for assessment. Faculty members who agree to participate have their evaluations published in a handbook that is made available to students.

In addition to the "on request" service, some academic departments distribute course evaluation forms that may be used for personnel assessment. Results of these evaluations are not publicized.

University of Maryland: The system is quite decentralized, and there are different forms for collecting student evaluation data in different departments and colleges. The College of Business and the College of Library Science do make student evaluation of teaching data available to students. In other units, data are available only through the practice of individual faculty.



Institutional Data Used with Standard Questions, Voluntary Participation of Faculty

University of Iowa: Statistical results of six questions on the University-administered course/teacher evaluation forms are made available to students in a paper-based reporting system. The six questions are:
  • The course requires an appropriate amount of work for the credit earned
  • The instructor increased my interest in the course material
  • Exams in the course were fair
  • The syllabus was an accurate guide to course requirements
  • The instructor clearly communicated class material
  • Overall this was an excellent course
Northwestern University: Electronically accessible. A system of evaluation of instructors by course is based on data collected and reported by the Registrar's Office. http://nuinfo.nwu.edu/academic/ctec/ (at the time of this review, the address returned the message "Forbidden: you do not have permission to access")



Institutional Data, Results for Standardized Questions Available Electronically for All Faculty

University of Colorado: Information about Faculty Course Questionnaires is accessible at www.colorado.edu/SARS/FCQ
University of Florida: A system is in place that is based on data collected by the institution, and summary results are available electronically. Summary data can be accessed at http://www.aa.ufl.edu/
University of Washington: All student ratings data are open for public disclosure. Recently, ratings from selected items have been made available on the Internet. Ratings for teaching assistants who lead lab or discussion sections are excluded. http://www.washington.edu/oea/



Institutions with Systems Under Revision

It became apparent in reviewing comments from various sources that Northwestern University's Course and Teacher Evaluation Council (CTEC) represents one system that has been in place for some time and is often mentioned as a good system. CTEC functions within the Office of the University Registrar. CTEC began as a student government committee in 1971 and currently operates under the direction of the Provost's Office. CTEC evaluates over 700 undergraduate and graduate courses each quarter and places a compilation of these course evaluations on NUInfo. The purposes of CTEC are three-fold.

That system was reviewed during the 1995-96 school year and proposed changes were implemented beginning fall quarter 1996. The most pertinent of the proposed changes are as follows:

Currently there is a process to provide student opinion data to students at the University of Wisconsin-Madison; however, the overall policies on evaluation of teaching differ significantly from the University of Minnesota's 1993 Senate protocols. First, at the University of Wisconsin there is no campus-wide uniformity in the content of the student opinion survey questions. Second, and as a result of the first difference, the information available to students draws on information collected by individual departments. The use of student course evaluations, and the pros and cons of standardization across departments, are being explored by an ad hoc committee that will report to the Faculty Senate. The currently available electronic versions can be found at: http://www.wisc.edu/advise/

A January 1997 Progress Report summarized issues and comments raised in the public hearing and other meetings of the committee. The following summary statements are especially pertinent:

Previous Efforts to Provide Opinion Data at University of Minnesota

The current interest on the part of the Minnesota Student Association in developing a system of collecting student opinion data and making them available to students is not new. Starting as far back as the early 1970s, various approaches have been piloted and implemented, but none of them has continued at the University of Minnesota-Twin Cities. Some attempts were initiated by students but met vocal opposition from the University, and some proceeded as joint student and administration efforts (e.g., the University Course Information Project). By way of contrast, there are numerous other institutions at which there is a continuing tradition of such summaries being available. For whatever reasons, such a publication has not become a tradition on the Twin Cities campus.

It is important to note that the current information reported in The Course Guide, available both in paper and electronically as part of the Office of the Registrar's Home Page, had its beginning in the late 1970s as the University Course Information Project (UCIP). UCIP included student opinion data as well as the information that is now "institutionalized" in The Course Guide, and was funded as a special fee through the Fees Committee that annually assesses student services fees for the following year.

Although The Course Guide provides instructor-supplied information about how the instructor sees the course being taught, other types of information are needed to enable students and their advisors to make even better choices of courses. The other questions deal with issues of teaching styles and student participation, since high quality instruction requires that both the instructor and students be engaged in the process. That premise suggests that although the majority of questions in Form PR (Public Release) might address instructor differences, some questions should focus on students' responses to the course.

Information Needs to Guide Course Selection

Students have access to several types and sources of information to assist them in selecting courses: collegiate bulletins, class schedules, The Course Guide, and recommendations from faculty, professional advisors, and fellow students. Although printed materials contain descriptive and factual data, recommendations often add an evaluative dimension based on an individual's opinion or a collective impression based on comments of others.

The selection of courses is perhaps the most fundamental set of decisions students make about their education, and it is incumbent on the institution to enhance the quality of these decisions. Students ought to be encouraged to make decisions based on accurate and complete information about the course, its learning activities and objectives, and how a particular course is taught.

In the past decade, several studies have focused on aspects of the course selection process that have implications for the discussion of the role of student opinion in the process. A 1990 study of changes in registrations on the Twin Cities campus was based on an extensive list of 63 specific reasons why students change their initial registration. Across all types of transactions, the highest numbers fell into three of the ten regrouped categories: "registration and course availability," "course information and content," and "changes in credits and grades." Taken together, these three categories accounted for 52.2 percent of the total number of course change transactions. Other more specific results were as follows: (a) the main reasons for changing sections were "registration and course availability" issues and "work conflicts"; (b) the primary reasons for dropping a course were in the categories "changes in credits and grades" and "course information and content"; (c) for those adding courses, the main reasons fell into the categories "registration and course availability" and "course information and content"; (d) for those canceling their entire registration, the primary reasons concerned "miscellaneous and personal reasons" and "work conflicts"; (e) in the case of changing credits, most reasons were in the categories "changes in credits and grades" or "registration and course availability"; and (f) for changing grade options, the most frequently given reasons were in the areas of "changes in credits and grades" and "course information and content."

Figure 6 below presents the relative percentages of course changes attributable to various types of reasons. Varying course information and content issues accounted for 17.4 percent of the changes, whereas perceived concerns about instructional quality accounted for only 3.8 percent of the changes in registrations.

Figure 6
Frequency of Types of Reasons for Changing Registration
Regrouped into Ten Categories


In planning for the University Course Information Project in 1979 (now known as The Course Guide), the Focus Group Interview Report: Undergraduate Student Needs for Course Information and Reactions to the Course Information Models was commissioned by the Minnesota Student Association. Three focus group interviews were conducted with 14 students, and 20 faculty advisors participated in three other focus group interviews. Two questions were posed: (a) What information do students need to choose classes?; and (b) What is the best way to get course information to students?

The following text and specific comments from students were taken verbatim from the report:

When the students were specifically asked which type of information was most helpful to them in selecting courses, they generally seemed to feel that the input received from students who had previously taken courses was the most useful form of information. Some of the comments were:

I think word of mouth is (the most useful) because someone's actually done it and it's not written down. What's written down can be totally different from what's in the class. The guy who wrote it (the course description) might have a misconception or the guy who's teaching it might decide to vary, and if you've had someone that's already taken the class, they've actually done it and they know what's going on firsthand. So that's the most reliable form of information.

I agree pretty much with word of mouth. That is for me the most reliable way of knowing what a class is actually going to be like…

I guess I'd go with trusted friends' opinions and things like that...

I think I more or less go by my friends' advice, because if they've already taken the course and if they've liked it, more than likely I would too. But I don't just talk to one person. I try to get like two or three people that have gone through the course already and get their input on the class so that it's not just biased by one person.

I use the scheduling book and word of mouth from some upperclassmen who have taken most of what's in my major, so I just talk to them.

While the students who participated in the interviews appeared to be in consensus on the importance of the information they received from other students, they also noted that the value of the information had to be weighed in terms of how well they knew the other students and how much they respected their opinions.

Information about course expectations and the teaching performance of professors and T.A.'s was also used by the participants. However, information of this nature did not seem to be of as much importance as the types of information specified earlier in this section. Such information, when used, appeared to be obtained primarily from other students.

The responses from the participants fell into three major categories. Students expressed a need for 1) additional information concerning course expectations and assignments, 2) more information about course content, and finally 3) information about the professor or T.A. teaching a course.

Of these needs, the most pressing was more information on course expectations and assignments. Students were interested in accessing this type of information so they could better balance quarterly schedules and course workloads. It also seemed that having this type of information would give them a better opportunity to select courses that matched their specific learning styles. Comments expressed concerning this need were:

I'd ask for another way of rating the difficulty of classes, because I've taken a 1000 level class that's been harder than some of my 3000 level classes. That's kind of deceptive. You sign up for a 1000 level anything and you think, and you may be wrong, that it's an easy class, that it's a freshman class. It should be an easy class and then you get in there and it takes so much time and blows you away compared to some of your other classes. You need some way to gauge how much time you're going to spend on some of these classes.

Maybe something on like a difficulty range. You know, like if somebody thought maybe it would be better to do it when you're a junior versus a freshman or something.

And maybe even a little bit about the format or something about how it's run or if it's four quizzes and one exam. It would take up a little sentence in the description. It wouldn't take up that much space.

I might want to know what's required of you before you sign up for it and go sit down and get your syllabus. You know, because like my Islam class, one thing that I don't like about it is you're graded on two tests and that's all you're graded on. I would prefer to know that ahead of time because maybe I wouldn't have taken it, which might have been a mistake since I enjoy it. But I'd like to know if you're expected to write five papers before you walk in there or if you're expected to have one test and everything is based on that one test.

The need for more information about course content was expressed by a number of participants. Their major concern centered on having more information about what is actually covered in a course and what the focus of the course is. They noted that the then-current catalog course descriptions were sketchy. It appeared that these students were most interested in having more detailed, accurate descriptions available to help eliminate the problem of enrolling for a class and finding out that it was considerably different from the catalog description. Some typical comments related to the need for more course content information were:

What I think they should do is take all the classes, if not all of them (then) the ones that you're obviously going to end up taking, and give you a paragraph or so, rather than just the two sentences that they have in the bulletin, and tell you exactly what the course is and what it involves rather than just generally what you can expect on the first day of class. I think that will help a lot.

I think a little bit more of a descriptive course detail (would be helpful). I'm planning ahead now and I'm thinking about taking a physical geography class which sounds just about the same as the geology course I'm taking now. I'd like to know the differences; when you look at the descriptions, they're really close.

The course content previous to registration would be helpful either in the course directory or the listings in the boards book or something...

Well maybe, at least the people would know what they would be getting into (if they had more course content information). I'm thinking of classes that I've taken that just blow me away by the great difference between what was actually taught there and what I thought would be taught…

Some participants thought that information about the professor or T.A. teaching a specific course would be useful. It should be noted, however, that information about the instructor was a much lower priority than information about course content and expectations. Specific comments relating to this need were:

I'd like to know a little bit more about the professor, but there doesn't seem to be a lot available. You know it tells you the name of the instructor but not much beyond

Maybe if it comes down to like a choice between my French classes, it would be a choice between T.A.s who teach the lower level French classes. Maybe even a brief summary of where they're from and where they've been educated and how long they've been teaching. One of my biggest problems with one of the French T.A.s that I got was that she was African and she didn't speak very well in English and so she just totally didn't speak English....

Like how effective they (students) thought the professor was, or I guess how the information was presented... if they thought the information was useful and that sort of thing.

You know, I would like to know some about the profs, too. You know, some kind of evaluation. I would like to see an overall evaluation compiled by the students and their opinions and why they feel that way. You know because there are different teaching styles and I learn better under specific teaching styles because I'm an individual and it would make it a lot easier if I could specifically pick out the classes I could learn better in.

Advisors see a variety of students and provide advice on a range of topics, and so are able to provide valuable insight into the types of information students need. Topics discussed with advisors centered on their need for information when advising students on course selection and on students' need for course information. The following text focuses on the advisors' views.

Within the major, advisors had few problems advising students on courses. They felt they were experts in their subject area, particularly if they had been in the system for a number of years. Advisors were familiar with the sequence in which courses needed to be taken, were acquainted with faculty members teaching courses, were aware of how classes operated, and were familiar with course content.

However, advisors expressed frustration with a lack of knowledge and information about courses offered outside their department. Much of the information they had about courses outside the major came from talking with students. In many cases, advisors felt they were not able to provide students with adequate information about courses outside the major. Comments included:

For majors it's fairly easy because I feel like an expert and I know how to keep them in sequence. I've learned how to solve little problems if they get out of sequence. The place that's really frustrating, and I don't know what the solution is, is when I as a Physics Professor am trying to give the student some good ideas about what to take for liberal education courses. We've gotten ourselves into such a huge smorgasbord of courses that I don't really know what is good, who is a good professor, or whether writing is going to be required in the course. I feel I don't do the job.

I'm in the sciences and it's difficult to advise students on what courses they might take in these other subjects that we aren't conversant with. Usually I advise students in the major. The source of information I have is talking to students about courses and learning from friends who have taken courses from those professors. It's the word of mouth information you get.

I've always found it almost impossible to give knowledgeable advice about the particular tracks within the liberal education program or even an approximation of what courses are like. There are just so many of them. And I'm just not often brought into close contact with them. About all I know is what I've learned debriefing students.

Advisors were asked, "If you could have any type of information you wanted to better help your advisees make decisions about what courses to take, what would you want?" They wanted more elaborate course descriptions and better information about when courses will be offered. Advisors wanted more information about the workload in the class, the requirements, the teaching methods used, and a more detailed description of course content. A number of advisors also mentioned that it is difficult for students to do long-range planning when they have little or no information about when courses will be offered.

As detailed a description as is possible about the course. The bulletin has minimal information. Some departmental advising offices keep on file more detailed course descriptions.

Course content. Like what's being offered in this course and what are the expectations of students, what kind of knowledge do they need to enter the course and perform successfully. So an elaborated bulletin description would be very useful.

Longer range information about when courses are available. I work with students who are designing their own majors and so in a sense the whole CLA bulletin is their menu. But it's like going to a restaurant where half the things on the menu aren't available in the kitchen.

Advisors thought students would like more information about instructors, yet most felt it was inappropriate for them to give advice about instructors unless it was to suggest those they knew were particularly good teachers. One advisor commented:

They would like us to be able to help them by knowing where the good quality instructors are. And that's always difficult for us. It's something most of us shy away from. We might point out people who are particularly strong in the classroom and avoid discussing all those who we know are not. Plus it's a judgment most of us are unwilling to make because it's not a personal experience... I don't really wish to be in the business of evaluating people's classrooms.

Most advisors thought advisees develop an informal information system to get information about instructors and courses from other students. This grapevine becomes better developed the longer students are at the University. Although advisors thought more information about instructors and more detailed information about courses would be helpful to students, some believed course selection was often dictated by the time the course was offered or which courses were still open at the time of registration.

Systems Already in Place at the University of Minnesota

Although the practice is not widespread, there are settings at the University of Minnesota in which student evaluations of teaching are made available to students to assist in their selection of courses. The most recent addition to the list is the voluntary release of student evaluation of teaching summaries for faculty in the Carlson School of Management. Beginning in 1994, the Carlson School invited interested faculty to provide access, in a central location, to the results of evaluations by students enrolled in courses they taught in previous quarters. Those summaries are assembled in a notebook and are available to students who wish to refer to them. No reliable data exist about the frequency with which students currently review those notebooks, and no data exist about the extent to which their review has affected course selections. Ultimately, if the student evaluation of teaching summaries do not affect course selection, considerable time and cost are spent with no benefit.

It is interesting to note, however, that there seems to be little commentary from students about the availability of the results from student evaluation of teaching. The July 1996 report The CSOM Undergraduate Education Improvement Project, carried out by six undergraduates with faculty supervision, drew on focus groups, in-person interviews, and a survey of 147 students in the Carlson School to identify issues and concerns relative to these topics: instruction, communication, international/cultural diversity, facilities, computer use/information, and coursework/classroom time commitment. There was no mention in the report of students' perceived needs for such information, although students' concerns about particular aspects of instruction (e.g., instructors' use of diverse teaching strategies and availability during office hours) suggest the possible usefulness of such questions. It appears that the issues of particular concern to students are not well represented in the five required questions.

Reliability and Validity of Student Evaluation of Teaching

The Office of Measurement Services provided considerable additional data for use by the subcommittee. Summarized below are several types of information that support the overall reliability and validity of student evaluation data used in the current context (i.e., for use in making personnel decisions).

In thinking about the issues posed to the subcommittee, there are at least four separate concerns and empirical bases for coming to a conclusion about the feasibility and value of making student opinion data available to students to assist in course selection:

The possible usefulness of student evaluation of teaching data in helping students select courses depends on the range and variability that currently exists in the data. Obviously, if all faculty received the same overall rating, the data would be useless, since no variability would exist in student opinion about the quality of teaching. Our analysis of data from the five required questions suggests that there is limited variability in instructor means, and that overall student opinion about the quality of teaching is quite positive, as the results in Figure 7 below suggest.

Figure 7
Mean Student Evaluation of Teaching Class Averages for
Five Required Questions for Twin Cities Campus by Year


Findings from over 20 years of research on student evaluations of teaching indicate the following general conclusions:

Student evaluations are useful as one component in the overall evaluation of instructional effectiveness, but there is little research on the usefulness of student opinions in helping other students select courses. High ratings do not guarantee effective instruction, nor do low ratings always mean ineffective instruction. Peer evaluation of instruction, as mandated by the Senate Policy on Evaluation of Teaching Contributions, self-evaluations, and reports of teaching activities and instructional development are among other important information sources that should be considered in the evaluation of teaching.

For personnel purposes, a small set of global questions has been demonstrated to be more appropriate than a larger group of specific questions (e.g., about the instructor's rapport with students or the quality of the textbook). The most useful single item is "overall teaching effectiveness." Students' ratings of the instructor's knowledge of the subject matter are best interpreted strictly as impressions and are of questionable validity.

Research suggests that student evaluations are reliable (i.e., average ratings are reasonably consistent and do not change significantly from one offering of a course to another), provided that neither the student composition of the course nor the instructional methods have changed significantly. The response rate is important: data from classes in which fewer than 75 percent of the students respond, or in which there are fewer than 15 students, may not provide reliable information.
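
These two thresholds can be expressed as a simple screening rule. The following short Python sketch is illustrative only; the record format and course names are hypothetical and are not drawn from any University system.

    # Flag classes whose evaluation summaries may be unreliable under the
    # thresholds cited above: response rate below 75 percent, or fewer
    # than 15 enrolled students. All data here are hypothetical.

    def is_reliable(enrolled: int, responded: int) -> bool:
        """Apply the two reliability thresholds from the research findings."""
        if enrolled < 15:
            return False                      # too few students
        return responded / enrolled >= 0.75   # response-rate threshold

    classes = [
        {"course": "Chem 1xxx", "enrolled": 240, "responded": 150},
        {"course": "Hist 3xxx", "enrolled": 28, "responded": 24},
        {"course": "Phil 5xxx", "enrolled": 9, "responded": 9},
    ]

    for c in classes:
        rate = c["responded"] / c["enrolled"]
        note = "" if is_reliable(c["enrolled"], c["responded"]) else "  <- interpret with caution"
        print(f'{c["course"]}: response rate {rate:.0%}{note}')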

Student evaluations are a valid measure of teaching effectiveness, but they are not synonymous with other measures of it. The correlation of global student evaluations with external measures of amount learned (e.g., final examination scores or course grades) is moderate at best.

A range of student ratings is expected. Typically, ratings are concentrated at the higher end of the rating scale, with a long "tail" consisting of a few low ratings. However, different distributions may sometimes occur; these call attention to the possibility of subgroups of students within the course who evaluate it quite differently.

Research indicates that average ratings may vary with characteristics (co-variates) such as class size, class format (e.g., lecture or discussion), course level, elective or required status, and instructor experience. Ratings tend to be higher for smaller classes, for electives, for classes with a discussion format, and for classes taught by an instructor with more experience. These possible co-variates are important in any comparisons among instructors, as the sketch below illustrates.
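
One simple way to respect such co-variates is to compare instructors only within strata defined by the co-variate. The Python sketch below, using invented records and the class-size bands of Figure 10, groups class-level mean ratings by size band before comparing instructors; it is a sketch of one possible approach, not a description of any existing University procedure.

    # Compare instructors within class-size strata (mirroring Figure 10's
    # bands) rather than overall, since ratings co-vary with class size.
    # All records below are invented for illustration.

    def stratum(size: int) -> str:
        """Class-size bands as used in Figure 10."""
        if size <= 15:
            return "5-15"
        if size <= 30:
            return "16-30"
        if size <= 100:
            return "31-100"
        return "Over 100"

    # (instructor, class size, mean class rating) -- hypothetical data
    records = [
        ("A", 12, 6.0), ("A", 25, 5.7), ("A", 150, 5.2),
        ("B", 10, 6.1), ("B", 14, 5.9), ("B", 28, 5.8),
    ]

    by_stratum: dict = {}
    for who, size, rating in records:
        by_stratum.setdefault(stratum(size), {}).setdefault(who, []).append(rating)

    for band, per_instructor in sorted(by_stratum.items()):
        means = {who: round(sum(v) / len(v), 2) for who, v in per_instructor.items()}
        print(band, means)   # compare instructors only within the same band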

If students have access to student ratings, they should avoid drawing inferences or conclusions from small differences in ratings. Responses are assigned integer values, but statistics are reported with decimal precision, and it is tempting but not justifiable to infer differential teaching effectiveness from small differences in average ratings. Even differences large enough to attain statistical significance may not indicate important differences in instructional effectiveness, especially at the higher end of the rating scale.
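
To make the point concrete: on a seven-point scale with ratings whose standard deviation is near one, a difference of a tenth of a point between two instructors can attain statistical significance once enough students respond, while remaining trivial in practical terms. The numbers in the Python sketch below are invented purely for illustration.

    # Illustrative only: invented means, standard deviations, and sample
    # sizes. Shows that a 0.1-point difference on a 7-point scale can
    # reach statistical significance with large samples.
    import math

    def z_for_difference(m1, s1, n1, m2, s2, n2):
        """Approximate z statistic for the difference of two sample means."""
        se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
        return (m1 - m2) / se

    # Two hypothetical instructors rated 5.7 and 5.6 (SD ~1.0).
    z = z_for_difference(5.7, 1.0, 400, 5.6, 1.0, 400)
    print(f"z = {z:.2f}")  # ~1.41: not significant at conventional levels

    z = z_for_difference(5.7, 1.0, 1600, 5.6, 1.0, 1600)
    print(f"z = {z:.2f}")  # ~2.83: "significant," yet still only a 0.1 difference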

Reliability. This section describes different aspects of the reliability of student evaluation of teaching.

One aspect of the reliability of student evaluation of teaching concerns the "response rate" in a course (i.e., the percentage of enrolled students who completed the opinion survey in the course). Results, by course level, were aggregated across three years (1993-94, 1994-95, and 1995-96) to obtain the overall picture of response rates in Table 2 below.

Table 2
Class Response Rates to the
Student Evaluation of Teaching

                        Course Level (number of classes and percent)
Response Rate       1xxx             3xxx             5xxx             8xxx
Less than 10%       18 (0.3%)        5 (0.1%)         21 (0.4%)        5 (0.1%)
10-19%              20 (0.4%)        19 (0.4%)        59 (1.1%)        11 (0.2%)
20-29%              58 (1.1%)        51 (0.9%)        64 (1.2%)        31 (0.6%)
30-39%              114 (2.1%)       109 (2.0%)       108 (2.0%)       43 (0.8%)
40-49%              188 (3.5%)       191 (3.5%)       150 (2.8%)       63 (1.2%)
50-59%              349 (6.4%)       366 (6.8%)       260 (4.8%)       122 (2.3%)
60-69%              586 (10.8%)      695 (12.8%)      589 (10.9%)      215 (4.0%)
70-79%              791 (14.6%)      1036 (19.1%)     919 (17.0%)      283 (5.2%)
80-89%              1193 (22.0%)     1189 (21.9%)     1455 (26.9%)     537 (9.9%)
90% or more         2101 (38.8%)     1740 (32.1%)     2907 (53.7%)     1250 (23.1%)
Total               5418             5401             6532             2560
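
As an aside on mechanics, a distribution such as Table 2 is straightforward to assemble by binning each class's response rate. The short Python sketch below uses invented rates; the bin labels mirror the table's categories.

    # Tally class response rates into the ten bins used in Table 2.
    # The sample rates below are invented for illustration.
    from collections import Counter

    def bin_label(rate: float) -> str:
        """Map a response rate (0.0 to 1.0) to a Table 2 category."""
        if rate < 0.10:
            return "Less than 10%"
        if rate >= 0.90:
            return "90% or more"
        low = int(rate * 10) * 10          # e.g., 0.63 -> 60
        return f"{low}-{low + 9}%"

    rates = [0.05, 0.42, 0.63, 0.63, 0.78, 0.88, 0.91, 0.95]
    counts = Counter(bin_label(r) for r in rates)
    total = len(rates)
    for label, n in counts.most_common():
        print(f"{label:>13}: {n}  ({n / total:.1%})")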

The aggregate ratings from across all courses and instructors have been remarkably consistent over the past three years. For most items, the means varied by less than one-hundredth of a scale point.

Validity. This section presents selected information on the validity of the five required student evaluation of teaching questions.

Findings from the data are consistent with the research literature and support the "reasonableness" of the results from a validity perspective. Full and associate professors are rated significantly higher than teaching assistants in their knowledge of the subject matter. In general, instructors of smaller classes receive higher ratings for their overall teaching and concern for students than do those who teach very large lecture courses. Ratings of assistant professors, when followed across time, show statistically significant and meaningful differences in average ratings in the areas of overall teaching and learning.

The ratings received by faculty who have received the Morse Alumni Award are higher on all five of the required items, as the results in Table 3 indicate. The analysis was based on results from the Student Evaluation of Teaching for persons who received Morse Teaching Awards during the years 1989-1995, compared to the mean ratings of overall teaching for the general population of University of Minnesota instructors for 1995-96. The frequency distributions for recipients are more highly skewed toward the high end of the scale when compared to the general population. Collapsed averages for recipients over three years remained constant, suggesting little variance in the quality of their teaching over that period.
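
The skewness observation can be quantified with the standard moment-based skewness coefficient, where a strongly negative value indicates ratings concentrated at the high end of the scale with a tail of low ratings. The ratings in the Python sketch below are invented for illustration.

    # Sample skewness of a ratings distribution. A strongly negative
    # value indicates ratings piled at the high end of the scale with
    # a tail of low ratings, the pattern described above.
    import statistics

    def skewness(xs):
        """Moment-based sample skewness: mean of cubed standardized deviations."""
        m = statistics.mean(xs)
        s = statistics.pstdev(xs)
        return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

    # Invented ratings on the 7-point scale.
    ratings = [7, 7, 7, 6, 6, 6, 6, 5, 5, 3]
    print(f"skewness = {skewness(ratings):.2f}")  # negative: left-skewed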

One way in which the validity of the five items is demonstrated is through differences between and among ratings of groups of faculty and types of courses that are consistent with hypothesized differences in the quality of teaching. It was not within the time limitations of the subcommittee to explore this aspect of validity in great detail, although some supportive evidence was collected.

First, the results in Table 3 below compare the mean ratings for faculty who have received the Morse Alumni Teaching Award with the overall means for all faculty/courses. The results indicate that the awardees received consistently higher ratings on all items.

Table 3
Averages for Five Required Items on the Student Evaluation of Teaching on the Twin Cities Campus:
Comparisons of Morse Teaching Award Recipients with the General Population
1995-96

Item                        Population Mean    Morse Recipient Mean
Overall teaching                  5.6                  6.1
Knowledge of subject              6.3                  6.6
Respect for students              6.0                  6.1
Amount students learned           5.3                  5.5
Physical environment              4.9                  5.3

Another type of analysis compared mean ratings for the five items based on instructor level, course level and class size. Those results are summarized in Figures 8, 9 and 10 respectively.

As the results in Figure 8 suggest, the mean ratings by instructor level were quite similar across the levels from teaching assistant to full professor. The one item that indicated the largest difference was "Instructor's knowledge of the subject matter," for which the means increased as rank increased.

Figure 8
Mean Ratings of Faculty on Five Required Questions
by Instructor Level for Summer 1995 to Spring 1996

                        Overall     Subject    Concern for    Physical       Personal
                        Teaching    Matter     Students       Environment    Learning
Professor                 5.61        6.41        5.88           4.91           5.30
Associate Professor       5.66        6.31        5.95           4.92           5.35
Assistant Professor       5.72        6.34        5.99           4.94           5.39
Instructor                5.64        6.19        6.03           4.95           5.28
Teaching Assistant        5.54        6.07        5.96           5.07           5.14

Results in Figure 9 for class level indicate that mean ratings tended to rise with course level, with lower-division courses receiving the lowest means on most items; the exception was the item rating the classroom physical environment, for which means declined as course level rose.

Figure 9
Mean Ratings on Five Required Questions by Class Level

                  Overall     Subject    Concern for    Physical       Personal
                  Teaching    Matter     Students       Environment    Learning
0xxx Level          5.55        6.10        5.97           5.26           5.18
1xxx Level          5.57        6.19        5.92           5.13           5.17
3xxx Level          5.61        6.24        5.92           4.96           5.28
5xxx Level          5.62        6.29        5.97           4.83           5.35
8xxx Level          5.74        6.36        6.07           4.68           5.36

Finally, the results in Figure 10 relative to class size are consistent with other literature indicating that ratings for courses with smaller class sizes are higher than for larger courses. It is important to note that neither the analysis of class level nor that of class size took into account other variables that have also been shown to correlate with the level of student evaluations.

Figure 10
Mean Ratings on Five Required Questions by Class Size

Number of            Overall     Subject    Concern for    Physical       Personal
Students in Class    Teaching    Matter     Students       Environment    Learning
5-15                   5.86        6.42        6.19           5.08           5.54
16-30                  5.67        6.26        6.02           5.03           5.28
31-100                 5.38        6.14        5.70           4.74           5.07
Over 100               5.38        6.22        5.56           4.83           5.03