About the Student Experience Survey

Students have distinctive, valuable insights to offer about the teaching and learning experience. At UO, student feedback is elicited through Student Experience Surveys (SES), which provide opportunities for both formative and summative feedback; SES feedback is also one of several sources of evidence used in teaching evaluation (alongside peer reviews, instructor reflections, and unit-specific sources). Survey questions focus on pedagogical practices that research indicates are most significant for student learning. These practices also align with UO's definition of teaching quality: teaching that is professional, inclusive, engaged, and research-informed. Surveys do not generate numerical ratings for comparison against departmental or university means.

TEP welcomes meeting with faculty and GEs who wish to debrief survey feedback and think together about how they might apply select feedback to their courses.

This page contains information on:

  • Midway Student Experience Surveys
  • End-of-term Student Experience Surveys
  • Student Experience Survey questions
  • How the survey practices align with UO's definition of teaching excellence
  • Student Experience Survey schedules
  • What led UO to develop the SES

When you are ready, check out our pages on actions to take before the surveys open, while the surveys are open, and after they are closed.

Midway Student Experience Surveys

The midway Student Experience Survey (M-SES) is centrally administered to all courses with at least five registered students. The M-SES is not administered during summer terms.

Feedback is provided only to the instructor; the survey is intended solely for the improvement of teaching, not for evaluative purposes. Unless you choose to share it with others, no one else can see the student feedback in your M-SES.

The M-SES is NOT advertised to students through Canvas, so you will need to remind students to complete it.

TEP recommends that every instructor collect and reflect on midway feedback. The M-SES will run automatically for courses, or you can use your own method if you want more flexibility in the questions asked of students. See our page on collecting and using student feedback for more methods of collecting feedback and designing feedback questions.

 

End-of-term Student Experience Surveys

The End-of-term Student Experience Survey (E-SES) is centrally administered to all courses with at least five registered students. Results are provided to the instructor and evaluators after grades are submitted.

Similar to the M-SES, the E-SES is intended to provide feedback on specific teaching practices and how they impact student perceptions of their learning. Responses to pilot surveys conducted in 2018-19 suggest that, compared to the old student evaluations of teaching, students more frequently provide specific comments about teaching practices and rarely make personal comments about instructors (down from 21% to 1.5% of comments).

Individual instructors might want to contextualize E-SES results by addressing them in personal statements and in the new Instructor Reflection survey. The Instructor Reflection is available from the beginning of week 10 until the first Friday of the subsequent term. It also lets instructors describe how they are implementing inclusive, engaged, and research-informed teaching practices.

 

Student Experience Survey Questions

Both the M-SES and E-SES ask students about ten specific teaching elements, listed below, drawn from research on the teaching practices that are significant to student learning. 

  • The inclusiveness of this course
  • Support from the instructor
  • Feedback from the instructor
  • The clarity of assignment instructions and grading
  • The use of active learning practices
  • Instructor communication in this course
  • The organization of this course
  • The relevance of the course content
  • The assignments or projects in this course
  • The accessibility of this course

Students are asked six questions about their experience in the course:

  1. Students are asked if each element is beneficial to their learning, neutral, or needs improvement to help their learning.
  2. Students are asked to choose the one element that is most helpful to their learning and to describe what about that element helped their learning.
  3. Students are asked to choose the one element that could most use some improvement to help them learn and to describe what specific change would help their learning.
  4. Students are asked to estimate how many hours per week they spend on the course outside of class.
  5. Students are asked how many times they interacted with the instructor outside of class.
  6. The sixth question is open-ended for students to share their thoughts. It differs slightly between the M-SES and E-SES.
    • For the midway student experience survey, students are given the opportunity to say to their instructor what they can do to best support their learning for the remainder of the course.
    • For the end-of-term student experience survey, students are given the opportunity to share anything else about their learning experience in the course.

If you’d like, you can review the exact text of the E-SES questions.

Practices and Teaching Excellence

The practices in the SES are aligned with UO’s definition of teaching excellence. When you review student feedback through the SES dashboard, the feedback is organized by pillar, as grouped below.

Professional Teaching

  • Instructor communication in this course
  • The organization of this course
  • The assignments or projects in this course

Inclusive Teaching

  • The inclusiveness of this course
  • The relevance of the course content
  • The accessibility of this course

Research-Informed Teaching

  • Support from the instructor
  • Feedback from the instructor
  • The clarity of assignment instructions and grading
  • The use of active learning practices

 

Student Experience Survey Schedules

These deadlines apply to quarters during the academic year. For the schedules for summer classes and semester courses in the Law School, see the Office of the Provost’s page on the SES.

For the midway student experience survey

  • Week 2, Wednesday: Faculty can begin adding questions to the M-SES
  • Week 4, Monday at 8am: M-SES opens to students, faculty can no longer add questions
  • Week 4, Friday at 6pm: M-SES closes for students
  • Week 5, Monday at Noon: M-SES responses are available to faculty

For the end-of-term student experience survey

  • Week 7, Wednesday: Faculty can begin adding questions to the E-SES
  • Week 9, Wednesday at 8am: E-SES opens to students, faculty can no longer add questions
  • Week 10, Monday: Instructor Reflections open to faculty
  • Finals Week, Friday at 6pm: E-SES closes for students
  • Day after grades are due, Noon: E-SES responses are available to faculty
  • 1 month after responses are available: Deadline for faculty to flag student responses for redaction
  • Friday of Week 1 of following term: Instructor reflections on previous term close

 

What Led UO to Develop the SES?

Prior System at UO

From 2007 to 2019, UO administered numerical student evaluations of teaching (SETs). These asked students to rank faculty on general criteria and invited open-ended descriptions of a course's and an instructor's "strengths and areas for possible improvement". In principle, these (and all SETs) have a dual purpose.

  1. Faculty can use SET results to help them identify areas of their teaching that need attention and improvement; that is, they have a formative purpose.
  2. SETs are used to inform evaluations of a faculty member’s teaching as part of decisions about tenure and promotion, contract renewal, and merit raises; that is, they have a summative purpose.

The latter purpose, especially, relies on the assumption that SETs are a valid measure of teaching effectiveness (assumed to be related to student learning). The research literature on SETs is extensive and stretches back nearly 100 years, but over that time little consensus has emerged about whether there is in fact a correlation between SET ratings and student learning, or even how one should measure student learning.

Research on SETs

Many—but not all—studies show a modest positive correlation between SET results and student learning (Spooren et al., 2013; Benton & Cashin, 2012). But recent work, including a careful meta-analysis of previous results (Uttl et al., 2016), indicates that there is no correlation between SET ratings and student learning after controlling for sample size and publication bias. Research specific to SET use at UO (Ancell & Wu, 2017) also concluded that "SET scores are not a valid measure of teaching quality at the UO" (p. 1).

Other problems arise as well. For example, there are indications that students often do not interpret questions and terminology on SETs in the same way faculty do (Lauer, 2012), so care must be taken with the wording of questions and the interpretation of results. Persistent questions also remain (see, for example, Stark & Freishtat, 2014) regarding students’ ability to assess teaching effectiveness, the use of SETs to compare faculty in the absence of information about the spread of scores within a relevant group of faculty, and whether student response rates on non-mandatory SETs accurately reflect the true distribution of student opinion. In addition, there is evidence that SET scores vary depending on class size, the level of the class, the discipline, and the prior preparation of the students.

Most disturbing, though, are results indicating that SETs show bias. Research on SETs finds racial bias (Smith, 2007; Smith & Hawkins, 2011) and ethnic bias (Smith & Anderson, 2005), with Black and Latino faculty receiving lower scores on SETs than their white colleagues. Similar research finds gender bias, with women receiving lower scores than male colleagues (Mengel, 2018; MacNell, 2015; Boring et al., 2016; Boring, 2017) and gendered language in written comments (Mitchell, 2018; Ray, 2018; Boring, 2017). The research by Ancell and Wu (2017) into SETs at UO found evidence that "female instructors receive systematically lower course evaluation scores while their students achieve more than their peers taught by male instructors in future courses" (p. 38).

While there is debate about the validity, utility, and fairness of SETs, there is agreement in the research literature that if they are used at all, SETs should be only one of several tools used to assess teaching (Benton & Cashin, 2012; Lauer, 2012; Berk, 2005). Peer reviews, self-evaluations, administrator reviews, student interviews, and alumni ratings are alternative strategies that can be combined to create a more representative picture of a faculty member’s teaching. Organizations such as the Association of American Universities (Dennin et al., 2018; The Association of American Universities, n.d.) and the Royal Academy of Engineering (n.d.) have argued that it is time for universities’ ideals regarding teaching excellence to align with their policies.

Current System at UO

With knowledge of these shortcomings, in 2017 the University Senate adopted a resolution to form a committee to overhaul UO's teaching evaluation system.  The Senate Continuous Improvement and Evaluation of Teaching (CIET) committee (established in April 2019 legislation) now oversees implementation of Senate legislation related to teaching evaluation. 

The University of Oregon has developed a holistic new teaching evaluation system that does more than simply replace problematic evaluation instruments. The new system provides the path to define, develop, evaluate, and reward teaching excellence. The goals of the new system are to ensure teaching evaluation is fair and transparent, is conducted against criteria aligned with the unit’s definition of teaching excellence, and includes input from students through the SES, peer reviews, and the faculty themselves.

In use since 2019, the SES has been shown to address issues seen in the old UO system. Pilot testing of the SES saw a roughly tenfold reduction in personal comments about the instructor, from about 20% of all student comments under the old system to less than 2% with the SES. References to instructor personality traits (such as bossy, sweet, funny, or patient) and perceived intelligence (such as bright, intellect, genius, or smart) were also reduced in the new student experience survey compared to the prior system. Gender-stereotyped language was also reduced with the SES. For example, under the prior UO system male faculty were twice as likely as female faculty to be described as a "genius". By focusing student reflections in the SES on specific elements of teaching, the use of this unhelpful descriptor was reduced overall by about 60%, and its use was equalized between male and female instructors.

For discriminatory, obscene, or demeaning comments that students still submit, the CIET and the Office of the Provost have developed a protocol to redact those comments from the SES results made available to unit heads and personnel committees. After SES responses become available, faculty can flag individual comments as discriminatory, obscene, or demeaning, and the CIET committee will review the flagged comments for redaction. In the first five years of the SES, faculty flagged 118 comments as discriminatory, obscene, or demeaning; the CIET committee redacted 62 of those comments. You can read more about the redaction process on the SES Comment Redaction Page.

References