"I think a better question to ask is if anyone finds them [student evaluations] useful or informative -- everyone who said yes to that would be instantly outed as an administrative mole, so we could haul 'em off for a little tarring and feathering. But otherwise, I don't know anyone who likes them."PZ: you now know your first person who likes student evaluations. And I don't just like them, I love them. I read each and every one of my personal evaluations, and I tweak my course based on them.
I view my student evaluations as one of the best measures of my teaching quality. My students are the reason I'm in the classroom; by the end of the semester they've had to endure more than 140 hours with me, and they almost certainly know how I teach better than any of my peers do. The members of my tenure committee (the only other people on campus who observe my teaching) each spend less than two hours a year in my classroom, and they always notify me before they come to observe, so I know to be on my best behavior. I trust 140+ hours of observation over less than two hours of observation (of my best lessons) any day. My peers also already know the subject I'm teaching, so they're not even my intended audience.
Now certainly, as many have pointed out, student evaluations have their flaws. Easy instructors will likely receive higher numerical scores, since grading hard inevitably makes some student enemies (though easy instructors may also be panned for wasting students' time). And there will always be students who hate an instructor for personal or idiosyncratic reasons; as long as these are only an anomalous few, they can usually be safely discounted.
The largest flaw I see in most student evaluations is that the stock evaluation forms provided by campuses contain pointless questions. As PZ points out, campus forms often ask questions that are only tangentially related to teaching (e.g., about the physical environment), and even the questions they do ask are relatively uninformative (e.g., "How would you rate the exams in this course, on a scale of 1-5?"). Our campus has these stock evaluation forms, and I'll agree with PZ: I largely ignore them. They tell me very little.
The evaluations I really care about, and the ones I'm eager to open at the end of every semester, are the evaluations that I write and give to the students myself. I change these evaluations each semester, asking only a few targeted questions about the specific course elements I want feedback on. If I introduced a new element to the course, I'll often ask students how they liked it; see my evaluation data on knowledge checks and the in-class response system for examples of this type of data. If the students hated the new idea, I try something else; if they loved it, I probably keep it.
I always leave plenty of space for written answers (along with several detailed short-answer questions), and I introduce the evaluation by telling students that I'll read every response over the break and use it to redesign the course in the future. I receive no campus support for these additional evaluations (I enter and analyze all the data myself), though I do often share the results with my supervisors and peers.
One of my favorite things to ask students on my lab evaluations is which labs were, in their opinion, the best and worst of the semester (providing a list of the labs to jog their memory). I enter all these rankings into a spreadsheet and compute which labs were the most and least liked; I then use this feedback to redesign the lab course for the following semester. For instance, last semester a yeast fermentation lab received relatively poor ratings overall, so this semester it is being replaced with a new lab (still on yeast fermentation, but addressing a separate issue in a different manner).
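If you'd rather script this tally than maintain spreadsheet formulas, here's a minimal sketch in Python. The file name ("lab_rankings.csv") and its column names ("best_lab", "worst_lab") are hypothetical stand-ins for whatever per-student records you collect; the only part that matters is the logic of counting "best" votes, counting "worst" votes, and sorting by the difference.

```python
# A minimal sketch of the best/worst tally, not my actual spreadsheet.
# "lab_rankings.csv" and its columns are hypothetical: one row per student,
# with that student's picks for best and worst lab of the semester.
import csv
from collections import Counter

best = Counter()   # votes for "best lab of the semester"
worst = Counter()  # votes for "worst lab of the semester"

with open("lab_rankings.csv", newline="") as f:
    for row in csv.DictReader(f):
        best[row["best_lab"]] += 1
        worst[row["worst_lab"]] += 1

# Net score: "best" votes minus "worst" votes, sorted most liked to least.
labs = set(best) | set(worst)
for lab in sorted(labs, key=lambda l: best[l] - worst[l], reverse=True):
    print(f"{lab}: +{best[lab]} / -{worst[lab]} (net {best[lab] - worst[lab]})")
```

The lab at the bottom of that printout is the one to rethink for next semester.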
So, in short, if you're complaining that your student evaluations aren't providing you with the feedback you'd like, then the problem is likely with the evaluations, not the students.
Bitch Ph.D. goes further than PZ and implies that students have no right to criticize her teaching, since they don't understand her pedagogical approach. Here's a sampling:
"Is anyone else bothered that our primary feedback on our work comes from children? ... In academia, the people who observe and evaluate you on a day-to-day basis are distracted 18-year olds who don't understand what your job actually is."I think most students have a very good grasp of what our job in the classroom is: we're there to help them learn the content our course is intended to cover. If we do a good job of helping students learn, they will recognize that ("Hey, now I know that worm is actually a nematode, and it's more closely related to arthropods than annelids, and ..."), and conversely, they can also tell if the class was largely a waste of time. Obviously every class will contain some students who think the class is a waste of time, but you know there's a problem when a large fraction of the class feels that way.
"I swear to god a major part of the reason we all feel so alienated and anxious is because we don't get feedback or praise from people who count on any kind of regular basis."
Here's an analogy: I don't have to fully understand the stressful life of a chef, or the complicated planning that goes into creating a menu, to know whether I like the food that comes out of the kitchen. Students will not necessarily understand the pedagogy underlying every decision an instructor makes, but they can judge whether they like those decisions, and whether those decisions facilitate their learning.
Just because I work hard and spend hours on my teaching doesn't mean that my teaching is necessarily of high quality; the only way I can determine whether I'm teaching well is to listen to the people who are in the best position to judge my work: my students.