At the end of every semester, students often fill out evaluations rating their teachers. It’s a report card of sorts, a way for instructors to learn whether they should improve how they teach a subject or whether they were particularly helpful in explaining a tough concept.
But it’s likely not the fairest way to evaluate a professor, according to some new research by French economist Anne Boring. She, along with Philip Stark and Kellie Ottoboni at the University of California, Berkeley, found that teacher evaluations can also reveal a gender bias many students might not even be aware they’re acting on.
Boring started her research at her home university, Sciences Po in Paris, where she found that male French students were rating male instructors more highly on their evaluations than their female instructors. The researchers didn’t want to make a final conclusion based on this result, however — after all, as NPR education reporter Anya Kamenetz points out, “Is it bias? Or were the male instructors, maybe, actually, on average, better teachers? (It’s science; we have to ask the uncomfortable questions.)”
The American version of the experiment added a creative twist: students took an online course taught by either a male or a female instructor. What the students didn’t know was that in half the cases the instructor’s gender had been swapped. Their “male” teacher was actually a woman, and their “female” teacher was actually a man. And here’s where gender bias was clearly shown: female students rated “male” instructors more highly, even though those instructors used exactly the same teaching methods as when they presented as their actual, female selves. It’s similar to the results of a 2014 study, which also found that students tended to rate their instructors more highly when they thought the instructor was a man.
Stark’s previous research in this area has shown that among the small share of students who actually complete their evaluations, most are either very unhappy or extremely happy with the instructor and class, which skews the results. He says trying to create a new evaluation that lacks bias is “hopeless.” Boring agrees, arguing that while teacher evaluations can be valuable, “they are too biased to be used in a high-stakes way as a measure of teacher effectiveness,” Kamenetz writes.
The researchers didn’t mince words with the title of their paper, either, offering perhaps the most concise, candid judgment of teacher evaluations of all: “Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness.” Kind of says it all.