Get the Most out of Student Evaluations

It is widely documented that student surveys (or student evaluations of teaching, SET) are one of the simplest and most widely used tools to evaluate teaching performance in higher education. Among the reasons behind the wide use of these surveys are cost, the ability to easily obtain feedback from stakeholders, and the assumption that students (being the recipients of those classes) have the ability to measure teaching effectiveness. Some authors have also noted that teaching evaluations may have formative aspects and can be leveraged by instructors to adjust and improve their teaching skills.


Student Survey of Teaching at Clemson 

At Clemson, the Student Survey of Teaching was developed following the Clemson University Faculty Manual requirement that this questionnaire shall meet the “minimum requirements of current research-based practices for student rating of course experience.” These surveys offer students a simple and anonymous way to provide faculty with constructive feedback about their teaching practices. The surveys are electronically distributed for most courses, with the exception of noncredit lab sections (course numbers ending in 1) and master’s and doctoral research hours (8910 and 9910), which can be added on request. For fall and spring, the default survey window is the final three weeks of the term; per the Faculty Manual, evaluations must be open for at least the final two weeks of the course to allow students time to complete them. For Minimester and half-term sessions, this timeframe is adjusted accordingly.

Despite their advantages, current research indicates that these surveys often only represent student opinions of teaching capability, rather than being a valid measure of faculty instructional effectiveness and/or student learning. Moreover, the numerical results of these surveys are influenced by a number of factors that are unrelated to the instructor’s teaching effectiveness, including the type of course, the students’ own interpretation of the questions, the instructor’s attractiveness, charisma, gender identity, and race, as well as the predominant gender of their department and the incentives in place, from chocolates to grade inflation. Considering these factors, the Faculty Senate has specifically noted that evidence of bias in SET extends beyond that of the students to include those responsible for assessing effective teaching (i.e., faculty, TPR committees, and administrators). Aiming to address these aspects and minimize the effect of biases, Clemson has recently adopted a series of changes to the way we assess teaching effectiveness. Among other changes, the 2023 Faculty Manual presents a model where the SET is complemented by two additional metrics of teaching effectiveness (Faculty Manual, Chapter VI, F.2.k.i). In this scenario, faculty are strongly encouraged to consider assembling a teaching portfolio that includes not only evidence of teaching effectiveness from a variety of sources but also the context needed to interpret that evidence. In addition, these portfolios enable faculty members to address the comments received and reflect on a path forward to improve their teaching in subsequent courses.

It is also important to note that faculty members, depending on the nature and size of the course(s) they teach, should consider which supplemental teaching evaluation methods (in addition to student evaluations) are most appropriate for documenting their teaching effectiveness. While there need not be a single standard set of teaching assessment methodologies for all courses, faculty members are encouraged to discuss their plans with their Chair/Director/TPR committee ahead of time, making sure the selected methodologies fulfill the needs of both the faculty member and the academic unit. This is critically important for faculty members, as it allows them to define their own terms for teaching evaluation and become integral members of the process.


Challenges linked to response rates

One additional challenge linked to the use and interpretation of SET is the (typically) low response rate, which limits the statistical value of the data collected. To address this point, many universities use some variation of this calculator to estimate the minimum number of responses needed for statistical validity. The calculation considers the margin of error, the confidence level, the size of the class, and the assumed response distribution. For example, a minimum of 80% participation would be required for statistical validity in a class of 100 students, with a 5% margin of error, 95% confidence, and a response distribution of 50%. As a point of reference, and while it shows slight variations from one semester to another, the overall response rate at Clemson is around 55%, and only 43% of the courses taught in Fall 2023 collected enough responses to consider the survey statistically valid. Although it is clear that student participation could be improved, the responses provided still have tremendous value. However, the results need to be interpreted with the understanding that they may not (statistically speaking) be representative of the entire population of that class.
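For readers who want to check the threshold for their own classes, the example above can be reproduced with the standard sample-size formula with a finite population correction. This is a sketch, not the specific calculator referenced above, but under the same assumptions it yields the same numbers:

```python
import math

def required_responses(class_size, margin=0.05, z=1.96, p=0.5):
    """Minimum survey responses for statistical validity in one class.

    Standard sample-size formula with a finite population correction:
        n = N * z^2 * p(1-p) / (e^2 * (N-1) + z^2 * p(1-p))
    where N is the class size, e the margin of error, z the z-score for
    the confidence level (1.96 for 95%), and p the assumed response
    distribution (0.5 is the most conservative choice).
    """
    top = class_size * z**2 * p * (1 - p)
    bottom = margin**2 * (class_size - 1) + z**2 * p * (1 - p)
    return math.ceil(top / bottom)

# The example from the text: 100 students, 5% margin of error,
# 95% confidence, 50% response distribution.
n = required_responses(100)
print(n, f"responses ({n / 100:.0%} participation)")  # 80 responses (80% participation)
```

Note how quickly the required participation rate climbs for small classes: the finite population correction means a seminar of 20 students needs nearly everyone to respond before the results are statistically valid.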


Making students part of the solution 

Several strategies are discussed in the literature, but they should be implemented carefully, as some (such as mandatory participation, which is not recommended) can further decrease the validity of the results. In this context, one of the most tangible ways to increase the validity of these assessments is to raise the traditionally low response rates by engaging students in the process so they understand the importance of their own responses and appreciate both the advantages and the limitations of the process. Faculty are also strongly encouraged to remember that many students already carry their own perceptions about participating in SET, so these changes could take time.

In general, faculty members can employ a number of strategies to increase the student response rate, including:

  1. Explain to students how course evaluations work, how they are used, who reads them, and how they can shape improvements in each course. This shows that their participation is not only important but also an essential part of the learning process.
  2. Deploy the Evalkit assessment early so students can provide feedback without the pressure of final exams or other end-of-semester demands.
  3. Remind students repeatedly about submitting course evaluations, with assurances about the anonymity of the process.
  4. Ask students for written feedback that will help improve the course offering in the future, including what they liked and what helped them in the course. This helps faculty document not only the positives but also the negatives, and the written comments can inform the faculty member’s self-reflection statement.
  5. Give students time to complete the surveys in class, a strategy that may also minimize issues with off-campus access to the survey and convey the message that the evaluation is an integral part of the learning process.

In addition, faculty members could calculate the threshold for statistical validity and set class-wide incentives to be disbursed when that response rate is met. That said, approaches like this should be considered on a case-by-case basis, as attaching any grade-related policy to the evaluations can lead to grade inflation.


Additional comments and ideas received from faculty

  • Embrace teaching evaluations in a similar manner to how you embrace feedback on journal manuscript submissions. While they may not always be easy to read, the process of reviewing and responding can help you become a better instructor.
  • Establish a regular system for reviewing the feedback and incorporating any needed changes. TPR committees often look at trajectories, and having more data points will help show your evolution as an instructor.
  • During your meeting with your TPR committee, talk to them about your process for developing the teaching effectiveness component of your TPR package. Document how you responded to their feedback and whether those changes impacted your teaching effectiveness.
  • After closing each course, make sure to archive all your materials along with examples of student performance and feedback. It is difficult to anticipate what data you might need, and this habit ensures that you have the data handy when putting together your packet. In addition, review your performance for that class using at least one of the methodologies you selected and summarize what the data showed you. Don’t try to make the data show what you want it to! It is okay to have an off semester.
  • If you are a tenured faculty member, be sure to set up appointments with your TPR committee each year. This helps the committee members get to know you and gives you a chance to hear their suggestions regarding your teaching, research, service, or leadership performance.
  • Faculty should keep in mind that departmental TPR guidelines must align with the university Faculty Manual. TPR documents are therefore living documents that can change over time, and the process for making changes is not instantaneous: revision requests require signatures from the department chair up to the provost before acceptance. Be sure to periodically check for new versions of the guidelines.


Need additional Information? 

If you are interested in learning more about this topic, please consider attending our Lunch and Learn (Working with Student Evaluations) on March 5 at noon in Vickery 201. This event is co-organized by our Office and the Office of Teaching Effectiveness and Innovation (OTEI), and more information will be available soon.


Have questions? 

If you have any questions regarding this post, please contact:

Faculty Fellow, Best Practices in Faculty Reviews
Office of Faculty Advancement



Glenn Department of Civil Engineering



We would like to express our gratitude to Dr. Marian Kennedy for compiling the comments and ideas received from CECAS faculty. We would also like to thank Melissa Welborn for providing swift access to data on student participation.