Consider This: USRIs - what are they good for?

Stephen Brookfield writes in Becoming a Critically Reflective Teacher that there are four lenses instructors need to consider while developing their teaching: personal experience, colleagues' perspectives, theory, and students' learning experience. At the U of A, the Universal Student Ratings of Instruction (USRI) are one way we give voice to students' perspectives on their learning experience.

As instructors, we can use students' feedback to adjust our teaching strategies and course design, not necessarily to cater to students' wants but to their needs. Are our instructional design and implemented strategies being well received by students? What is their perspective on the cognitive load of the course? Are our instructional examples still relevant and current? Were our assignment guidelines and exam questions clear?

Granted, instructors need to filter this student feedback through their own teaching experience, the relevant teaching and learning literature, and the context of their course. Like others (Smith & Cardaciotto, 2011; Van Sickle, 2016), I have been surprised when a seemingly engaged active learning class produced USRIs in which students indicated they were unengaged and did not appreciate the learning experience. When students and instructors perceive the learning experience differently, that is worthy of critical reflection. Are our expectations too high or too low? Have expectations shifted across the constellation of courses students were taking at the same time as ours? Sometimes USRI results are not about us; sometimes they reflect students' broader experience. We also know that STEM disciplines, in particular the mathematical sciences (Uttl & Smibert, 2017), are typically rated lower by students, and that gender and racial bias may exist in student evaluations of teaching (Boring, 2017; Gupta, Garg, & Kumar, 2018; MacNell, Driscoll, & Hunt, 2015; Mitchell & Martin, 2018; Smith & Hawkins, 2011; Spooren, Brockx, & Mortelmans, 2013). However, some authors dispute the significance of these findings (Benton & Ryalls, 2016).

Debriefing our USRIs with a trusted colleague can help place students' ratings and feedback in perspective and also gently call us out on our own biases about the classroom experience (Chism, 1999). The educational developers and peer consultation program at the Centre for Teaching and Learning provide good opportunities for this kind of collegial debriefing of USRIs. With something as complex as teaching and learning, USRIs are but one source of data. A truer reading of teaching effectiveness needs to be triangulated among the lenses of our colleagues' perspectives, the teaching and learning literature, and our own personal experience, in addition to the USRIs we receive from our students (Brookfield, 2002).

For USRIs to be useful for instructors' critical reflection on their teaching, students need to consider the tone and purpose of USRIs. Students need to write critically constructive feedback and should be aware that their feedback is used in the evaluation of the instructor by their peers. Civil, constructive feedback is useful and, delivered with the right tone, is more likely to elicit change than language that attacks and hurts.

Just as instructors need to triangulate their own critical reflection on their teaching using the lenses of students, colleagues, experience, and theory, administrators and FEC members must attend to more than one indicator of teaching effectiveness (Benton & Ryalls, 2016; Hammonds, Mariano, Ammons, & Chambers, 2017). Consideration of teaching dossiers and classroom peer observations is absolutely essential to assessing the quality of instruction. And as with USRIs, this requires more than one data point. A few other possibilities are listed on the CTL website. Historical trends in class observations, along with instructors' critical reflection on students' and colleagues' perceptions in light of evidence-based theory, must be included in any summative evaluation of teachers' instructional effectiveness (Boysen, Kelly, Raesly, & Casner, 2014).

Any assessment of instruction must be multifaceted because teaching and learning are multifaceted. Problems occur when USRIs are the sole metric of teaching effectiveness, especially when those measures are not linked to actual student learning outcomes (Uttl, White, & Gonzalez, 2017). Certainly, we are interested in student learning and not just in students' satisfaction with their learning experience. This must be so because teaching often requires nudging students outside of their comfort zone. Lev Vygotsky advocated teaching students in their zone of proximal development, in which students cannot yet complete a task on their own but can solve the problem with guidance or in collaboration with more capable peers. This is where the best student learning occurs - when the task stretches students just beyond their current ability (Ambrose et al., 2010). This is difficult work, and sometimes students need to be reminded that learning is hard, messy work that requires multiple failures before getting it consistently right. And this messy, difficult experience of learning may not be immediately welcomed or appreciated by students at the time they are invited to complete the Universal Student Rating of Instruction (Haave, 2017).

Neil Haave - Associate Director (Scholarship of Teaching and Learning), Centre for Teaching and Learning

Neil Haave has been teaching molecular cell biology and biochemistry at the Augustana Campus since 1990, where he has served as Chair of Science and Associate Dean. Neil is a recipient of a McCalla Professorship and of Augustana's Teaching Leadership Award and Teaching Faculty Award for the Support of Information Literacy. Neil is interested in developing scholarly approaches to teaching, which includes researching our own educational practices in light of the published literature on teaching and learning in higher education. He occasionally posts to his blog, Actively Learning to Teach.

Resources

Ambrose, S. A., Bridges, M. W., DiPietro, M., Lovett, M. C., Norman, M. K., & Mayer, R. E. (2010). What kinds of practice and feedback enhance learning? In How learning works: Seven research-based principles for smart teaching (pp. 121-152). San Francisco, CA: Jossey-Bass, a Wiley imprint. Retrieved from https://ebookcentral.proquest.com/lib/ualberta/reader.action?docID=529947&ppg=147

Benton, S. L., & Ryalls, K. R. (2016). Challenging misconceptions about student ratings of instruction. IDEA Paper, 58(April), 1-22. Retrieved from http://www.ideaedu.org/Portals/0/Uploads/Documents/Challenging_Misconceptions_About_Student_Ratings_of_Instruction.pdf

Boring, A. (2017). Gender biases in student evaluations of teaching. Journal of Public Economics, 145(January), 27-41. https://doi.org/10.1016/j.jpubeco.2016.11.006

Boysen, G. A., Kelly, T. J., Raesly, H. N., & Casner, R. W. (2014). The (mis)interpretation of teaching evaluations by college faculty and administrators. Assessment & Evaluation in Higher Education, 39(6), 641-656. https://doi.org/10.1080/02602938.2013.860950

Brookfield, S. D. (2002). Using the lenses of critically reflective teaching in the community college classroom. New Directions for Community Colleges, 2002(118), 31-38. https://doi.org/10.1002/cc.61

Chism, N. V. N. (1999). Peer review of teaching. A sourcebook. Bolton, MA: Anker Publishing Company, Inc.

Gupta, A., Garg, D., & Kumar, P. (2018). Analysis of students' ratings of teaching quality to understand the role of gender and socio-economic diversity in higher education. IEEE Transactions on Education, 61(4), 319-327. https://doi.org/10.1109/TE.2018.2814599

Haave, N. (2017). Assessing teaching to empower learning. Collected Essays on Learning and Teaching, 10, iii-ix. https://doi.org/10.22329/celt.v10i0.4910

Hammonds, F., Mariano, G. J., Ammons, G., & Chambers, S. (2017). Student evaluations of teaching: improving teaching quality in higher education. Perspectives: Policy and Practice in Higher Education, 21(1), 26-33. https://doi.org/10.1080/13603108.2016.1227388

MacNell, L., Driscoll, A., & Hunt, A. N. (2015). What's in a name: Exposing gender bias in student ratings of teaching. Innovative Higher Education, 40(4), 291-303. https://doi.org/10.1007/s10755-014-9313-4

Mitchell, K. M. W., & Martin, J. (2018). Gender bias in student evaluations. PS: Political Science & Politics, 51(3), 648-652. https://doi.org/10.1017/S104909651800001X

Smith, C. V., & Cardaciotto, L. (2011). Is active learning like broccoli? Student perceptions of active learning in large lecture classes. Journal of the Scholarship of Teaching & Learning, 11(1), 53-61. Retrieved from https://josotl.indiana.edu/article/view/1808/1805

Smith, B. P., & Hawkins, B. (2011). Examining student evaluations of Black college faculty: Does race matter? The Journal of Negro Education, 80(2), 149-162.

Spooren, P., Brockx, B., & Mortelmans, D. (2013). On the validity of student evaluation of teaching: The state of the art. Review of Educational Research, 83(4), 598-642. https://doi.org/10.3102/0034654313496870

Uttl, B., & Smibert, D. (2017). Student evaluations of teaching: teaching quantitative courses can be hazardous to one's career. PeerJ, 5, e3299. https://doi.org/10.7717/peerj.3299

Uttl, B., White, C. A., & Gonzalez, D. W. (2017). Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42. https://doi.org/10.1016/j.stueduc.2016.08.007

Van Sickle, J. R. (2016). Discrepancies between student perception and achievement of learning outcomes in a flipped classroom. Journal of the Scholarship of Teaching and Learning, 16(2), 29-38. https://doi.org/10.14434/josotl.v16i2.19216