Today we're going to discuss a recent paper in the educational development literature: "Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees" by Angela R. Linse from The Pennsylvania State University, published in Studies in Educational Evaluation, 54 (2017), 94-106.
What are student ratings, you ask? Well, they are simply the students' answers to a survey at the end of a class (usually at the end of the semester). Those surveys are mostly standardized by each university; mostly, because some universities allow instructors to add their own little question(s) on top of the compulsory material. More importantly, student ratings are used to judge faculty on their teaching ability ... or so we've been told?!
This paper was like a revelation to me, I should say revelationS in fact! First, it was my first paper in the realm of the educational development literature, and it makes several eye-opening points if you are unaware of this literature. Second, it deals with student evaluations and how to interpret them, and reading this article was like taking a wisdom shower, to say the least.
Let's dive into it then, shall we?
First, this article addresses the misconceptions stemming from the numerous articles available online about student ratings. Indeed, we have all read in the Chronicle of Higher Education or Inside Higher Education some opinion about student ratings being useless for X, Y, or Z reasons. How many of those opinions cite research? I'll bet ya almost none ... Nonetheless, there is a huge literature on student ratings out there, and none of it says they are useless! As Angela R. Linse's review points out, faculty misconceptions about student ratings are numerous:
- "Student ratings are the sole measure of teaching;
- Other faculty manipulate students to achieve higher ratings;
- Students are biased against certain faculty members (and no one will notice);
- Ratings do not reflect use of effective teaching methods;
- Correlations with other variables make the ratings invalid or unreliable;
- Online response rates are too low to be representative;
- Students do not take the ratings seriously, lie, or are overly critical;
- Evaluators focus on rare or negative ratings and do not know what normal variation is acceptable."
If some of those struck a sensitive chord ... no worries, that was exactly how I felt; I could have said some of those myself! The take-home message here is that if you are reading this and agreed with some of the misconceptions, there is hope if you keep reading or go and read the paper for yourself right now.
The author, Angela R. Linse, goes on to tell us what student ratings are not. This is extremely useful for knowing how to interpret them. Indeed, student ratings are "student perception data"; they are not objective, let us say it again: they are not objective (so they necessarily come with the classic societal biases against women, minorities, etc. ... a side point that's actually pretty big). Hence, it should seem obvious that student ratings are not faculty evaluations! This one might be hard to drive home, as Dr. Linse mentions, because of how student ratings are usually called: "Student Evaluations of Teaching" or "Course Evaluations" or "Teaching Course Evaluations" (TCEs at UK). Finally, student ratings cannot be considered student learning assessments either ... we grade students for that.
What are student ratings then? Well, they are extremely useful for a whole field of research ... "the most researched topic in higher education (Berk, 2013; Seldin, 1999) and the research literature has accumulated for more than 80 years (Cashin, 1999; Ory, 2001; Theall & Franklin, 1990, 2001)." Now we know that we don't know; indeed, it seems that this research literature fails to reach faculty and administrators, while internet articles do reach them and dictate how to use student ratings, often on the wrong basis! Let's see how to use student ratings now.
[Figure: Amount of cheese sold in France by region, in tonnes (Alps vs. elsewhere).]
First of all, looking at the mean of your student ratings is somewhat uninteresting ... if that sounds familiar to you then you're like me ... DAHH, that's exactly like plotting bar graphs with mean +/- SE (or SD): a lot of space used to tell almost nothing ... plot box plots so that I can see the spread of the data! Same here with student ratings: simply comparing yourself to the mean doesn't accomplish anything ... there will always be people above and below the mean. The only way to make sense of student ratings is to look at trends and outliers, one faculty member at a time.
Let's take a hypothetical example involving cheese, France, the Alps, and maybe me. If you look at the graph on the right, you can see the mean amount of cheese sold in France by region (Alps vs. elsewhere). First, there is the possibility that sales dropped in the Alps after I left in August of 2013 (yes, it would imply that my own consumption was around two tonnes per year ... not unrealistic, I'll leave it there). Besides that silly point, we have no idea if sales are centered around these means or if there is any kind of dispersion around them ... indeed, another possibility is that people in the Alps have very steady sales around 8-10 tonnes of cheese per year, whereas elsewhere there could be 20 tonnes sold in Paris as opposed to 2 in the south (tourists stealing our cheese, and it's too hot for cheese in the south). The point is, there is no way to know by just looking at the means ...
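To make that concrete, here is a minimal sketch (in Python, with made-up cheese numbers that are purely illustrative) of how two groups can have nearly identical means while hiding completely different spreads; a box plot shows the dispersion that a bar graph of means would bury:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical yearly cheese sales in tonnes -- made-up numbers, illustration only
alps = np.array([8, 9, 10, 9, 8, 10, 9])        # steady sales around 8-10 tonnes
elsewhere = np.array([2, 3, 20, 18, 2, 19, 3])   # very dispersed sales (Paris vs. the south)

# The means are almost identical, yet the stories are completely different
print(alps.mean(), elsewhere.mean())  # ~9.0 vs. ~9.6

# A box plot exposes the spread that the means alone would hide
plt.boxplot([alps, elsewhere], labels=["Alps", "Elsewhere"])
plt.ylabel("Cheese sold (tonnes/year)")
plt.show()
```

The same logic applies to student ratings: the mean alone tells you almost nothing about how the individual responses are distributed.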
Now, I can hear people screaming that one gets good student ratings by giving out good grades, and that instructors do just that (look at the nationwide grade inflation). To that, Dr. Linse's answer is simply that correlation does not mean causation! Indeed, some studies found a correlation, but it's commonly between 0.2 and 0.3 (you can find anything between 0.1 and 0.5 out there though) ... that doesn't seem convincing to me ... a correlation of 0.2-0.3 means grades share only about 4-9% of the variance with ratings, so what explains the 90%+ that remains?! Certainly not the grades alone, so there must be more to getting good student ratings than just giving out good grades. Furthermore, all those correlational studies seem to provide evidence that "students who learn more earn higher grades and give higher ratings."
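If you want the arithmetic behind that back-of-the-envelope claim, here is a tiny illustrative sketch (Python): the proportion of variance one variable can account for in another is the squared correlation, r².

```python
# Share of rating variance that grades could account for, for a given correlation r
for r in (0.1, 0.2, 0.3, 0.5):
    shared = r ** 2
    print(f"r = {r}: r^2 = {shared:.2f} -> {shared:.0%} explained, {1 - shared:.0%} unexplained")
```

Even the generous end of the reported correlations (0.5) leaves three quarters of the variation in ratings unexplained by grades.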
The take-home message here is: look at your student ratings over time, and look at the trends and the outliers.
There is way more in this paper than I can discuss in a blog post, but we can clearly say that it should be a must-read for every faculty member and administrator who will one day use student ratings one way or another. Importantly, the author devotes the second half of her paper to advice on how to use student ratings, with concrete examples (and there are two sections tailored to administrators and to faculty evaluators, respectively). Among other things, the author provides readers with a document to help them analyse their own student ratings. I cannot resist including it below because I found it super useful! I hope this little excursion into the world of student ratings research helped you feel less depressed and more empowered about your own student ratings; it certainly did for me.