Thursday, March 18, 2004

In-class response system update: the positives

I've previously discussed the negative aspects of the in-class response system I'm using this semester. While many of the negative aspects are technical problems (installation, using the software, transmitter reliability, etc.), the positive aspects are primarily pedagogical. In this post I'll summarize how I've used the system to date, and then discuss the benefits of the system.

How I've used the system:

I'm using the system primarily as a tool to encourage participation, and I've tried to make it as low-stress as possible for the students. I typically ask two to four questions in each lecture using the system, and the students have around two minutes to answer each one. To prevent the system from feeling like a pop quiz, I give students one point per question they answer, regardless of whether their answer is correct. This leaves me free to ask difficult questions designed to motivate discussion without causing the students undue worry, and it also removes much of the possible motivation for cheating or collusion.

Currently I use two workarounds to help ease the pain of technical problems, both of which I derived from Randy Phillis. First, students can turn in a handwritten list of their answers in lieu of using their transmitters for four days a semester (there are ~28 lecture days in the semester). Second, to account for the possibility that some answers aren't received by the system, anyone who answers 90% of all possible questions will get 100% of the possible participation points.
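As a concrete sketch of the scoring rule described above (one point per question answered, full credit at 90% answered), here's how it might be computed. The function name and sample numbers are just illustrative, not how my gradebook actually works:

```python
def participation_score(answered, total_questions):
    """One participation point per question answered (right or wrong);
    answering at least 90% of all questions earns 100% of the points."""
    if total_questions == 0:
        return 0
    # Integer comparison avoids floating-point edge cases at exactly 90%.
    if answered * 100 >= total_questions * 90:
        return total_questions  # full participation credit
    return answered

# e.g. a student who answered 81 of 90 questions (exactly 90%)
# receives all 90 participation points.
print(participation_score(81, 90))  # -> 90
```

The 90% threshold is what absorbs the occasional transmitter signal that the receiver misses.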

I've asked a mix of question types, including questions on content that hasn't yet been introduced, basic recall questions on a topic we've just covered, and extrapolation questions designed to encourage discussion. The discussion questions have been the most interesting, and they're the ones I try to use most often. Straight recall questions have been OK, but it's hard not to say, "The answer is B, here's why, let's move on," which gets pretty dry after only a few times, and likely hinders participation on the discussion questions (e.g. "He'll just tell us the answer anyway, so why should I bother raising my hand?"). However, even the recall questions can provide useful data on student comprehension.

Questions on content that hasn't been introduced yet are probably the students' least favorite of the three (based on my limited student survey data), and I'll grant that they can be awkward if the topic is completely unfamiliar. However, when they're asked properly, pre-content questions can lead smoothly into a discussion on the upcoming topic and get the students engaged right from the start.

A unique aspect of using this system is that I have no time to prepare my response to the student data: I see it at the same time the students do. While this makes things a bit more stressful, it's fun because it's made my lectures more spontaneous.

Benefits of in-class response systems:

Probably the largest benefit is that the in-class response system provides an easy mechanism for formative assessment during class periods. By answering a question after a challenging topic, each student is pushed to evaluate their own understanding of that topic, and by seeing the entire class's responses I get an accurate gauge of the class's comprehension.

When an instructor poses an open question to a lecture class, fewer than ~20% of the students will typically respond (with the percentage dropping as classes get larger; I had a citation for this but can't find it anymore). We've all been in lectures where the three "smart students" answer all the professor's questions, allowing everyone else to relax and not even ponder the question at hand. I've tried to get around this in the past by using a program to call on students randomly, but even then only one or a few students got a chance to speak up. When only a few students answer a question, the instructor necessarily gets a skewed idea of the class's knowledge of any given topic; this system remedies that by allowing every student to answer every question.
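The random-calling program mentioned above can be approximated in a few lines; the roster names here are hypothetical, and this is just one way to do it:

```python
import random

def random_caller(names):
    """Yield students in a random order, covering everyone once
    before any student is called on a second time."""
    while True:
        # random.sample returns a shuffled copy without repeats.
        for name in random.sample(names, len(names)):
            yield name

# Hypothetical roster; in practice this would come from the class list.
roster = ["Alice", "Bob", "Carol", "Dave"]
caller = random_caller(roster)
print(next(caller))  # one randomly chosen student per question
```

Even with a tool like this, of course, only one student answers at a time, which is exactly the limitation the response system removes.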

When using the in-class response system, the students can't just mentally waffle between answers; they must commit to a single choice. Thus the students have bought into one of the answers, and are more invested in the ensuing discussion about the topic. Conveniently, I also get quantitative data regarding what misconceptions students have, and thus can better guide the class discussion. If I really wanted to be mean I could go into the logs and figure out who picked any given answer and then call on them to explain it, but I wouldn't be that mean.

The answers students submit are also pseudo-anonymous, which makes the students more comfortable. While the students know that I, as the instructor, can go through and see how they've answered any given question, they never have to share their answers with their peers. This allows even the shyest student to answer a question in a large lecture, removes the fear of public humiliation for answering incorrectly, and allows students with non-conformist views to express them more openly. Making answers fully anonymous is also easy: students can trade transmitters for a specific question, or the software may offer a setting for anonymous responses.

The system is also a good classroom management tool. The screen displays both the question text and the remaining time to answer the question, and once the timer reaches zero the program automatically graphs the student responses. Students are interested in their peers' responses, and thus as the timer approaches zero virtually all discussion stops, and then there's almost dead quiet as everyone mentally analyzes the graph at the same time. It's pretty amazing not to have to do anything special for 180 chattering students to become quiet and attentive after asking an open question.

The students also seem to love the system. There was a lot of excitement when I first mentioned the transmitters in class, and while the buzz has died down since then, the students are still enjoying the system. Multiple students have complimented the system unprompted, and I've also run a brief in-class survey asking how much they like it. Here are the results of the survey:

How much do you like having the PRS transmitters in this class?
Extremely like them 39%
Somewhat like them 43%
Neutral 8%
Somewhat dislike them 5%
Extremely dislike them 4%


If you had a choice, would you prefer this class use or not use the PRS system?
Greatly prefer with 48%
Somewhat prefer with 33%
Neutral 11%
Somewhat prefer without 5%
Greatly prefer without 3%


Would you like other instructors to use the PRS system?
Definitely want 46%
Somewhat want 29%
Neutral 19%
Somewhat don't want 1%
Definitely don't want 6%

Data based on in-class evaluations after about 3 weeks of using the in-class response system; n ranged from 90 to 102 for the three questions. The survey was conducted via the in-class response system, and this may have biased the results somewhat. Students traded transmitters with their neighbors for anonymity.

While the evaluations aren't unanimously positive, I'm quite encouraged by them. One thing that strikes me is that the students had to pay anywhere from $15 to $45 each for their transmitters, and even with that monetary expenditure more than 80% of them would prefer the class have the transmitters, and fewer than 10% didn't want them.
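Those aggregate figures come straight from summing the responses to the second survey question:

```python
# Percentages from the "use or not use the PRS system" survey question.
prefer_with = 48 + 33     # greatly prefer with + somewhat prefer with
prefer_without = 5 + 3    # somewhat prefer without + greatly prefer without

print(prefer_with)     # -> 81 (more than 80% prefer having the system)
print(prefer_without)  # -> 8  (fewer than 10% would rather go without)
```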

As a final note, the in-class response system is also very flexible, and can be used for any number of purposes. I'm using it to encourage individual participation, but I know of other instructors who use it to promote group work (e.g., give one transmitter to each group of students) or to collect data for later analysis by the class (e.g., in a statistics class).

I'll try to post further updates after I've gotten more experience using the system.