Based on Professor Lowes' comments from last week, I tried to keep an eye out for weaknesses in the research articles I read this week.
In the Haavind article, she looked at the relationship between low instructor contribution and high collaboration among students. The study started with seven high-collaboration classes, but only three fit the model of low instructor posting and sustained high collaboration throughout the course, so only those three were studied.
Seven classes is a small data set to start with, and throwing out four to look at only the remaining three makes it smaller still. It also seems like there would be value in trying to understand where the contradictions came from. Class 7 had extremely high instructor contribution *and* sustained high collaboration. Why?
In the Zucker report, he looked at the impact of two approaches to promoting student-student interaction and collaboration. Teachers in the experimental group were told to double the point value that student-to-student interaction counted toward the grade, and that group was compared to a control group using the normal point assignments.
Four weeks into the fifteen-week class, the teachers in this study still hadn't doubled the points for student-to-student interaction. Once asked directly, they did, but I think the interaction patterns had already been established by then, and that undermines the study.
Zucker also administered a survey and reported a very high student response rate of 82%. It made me wonder how the survey was presented. Who did the students think the survey was coming from: the school, the teacher, a researcher? Just curious. Also, a survey pet peeve of mine is not having "other" as an option, or better yet an open text field. Without one of those options, and with every question mandatory, he effectively forced the survey takers to agree with him, leaving no room for any response he hadn't considered.
I also noticed that the Rice article references a Cavanaugh study - I thought it was the one we read last week, but no. The one from last week covered distance ed from 1999-2004. The study Rice references covered distance ed from 1980-1998. Really?! Did they look at correspondence classes? Wow.

Yes indeed, you really picked up on the issues in these articles, including the big one, which is that correlations alone are not enough. You also need to understand why, and that often involves additional research.