For the last three years, ever since I audited a few afternoons of this workshop in 2011 and then was fortunate enough to be added to the Modeling Instruction listserv, I have given my ninth-grade students the (Simplified Language) FCI as a pre-test and a post-test at the beginning and end of my force and motion units (currently based mostly on Science Curriculum Inc.'s Force, Motion, and Energy). Each year, my ninth graders' pre-test scores have averaged 27-28%, completely in line with the first-year high school physics students in the Hestenes et al. (1992) paper. And each year, my gains in percent correct have grown a bit, but from a non-awesome baseline: 10%, then 11%, then 14%. I do much better with some students, but not with all of them. I want to be the kind of physics teacher who has average gains of 35-40%. (Figure: "Traditional" vs. "Interactive Engagement" FCI gains from Hake (1998) vs. my data for individual students this year and overall averages for the past three years.)
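For anyone who wants to compare their own numbers against that figure: Hake's normalized gain is just the raw improvement divided by the room left to improve. A minimal sketch of the arithmetic, assuming (as in my classes) a pre-test average around 27% and raw point gains of 10, 11, and 14:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake (1998) normalized gain: the fraction of the possible
    improvement actually achieved, g = (post - pre) / (100 - pre)."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Approximate class averages from my last three years (assumed, for
# illustration): ~27% pre-test, raw gains of 10, 11, and 14 points.
for raw_gain in (10, 11, 14):
    g = normalized_gain(27, 27 + raw_gain)
    print(f"raw gain {raw_gain} pts -> normalized gain g = {g:.2f}")
```

Working from class averages like this is only an approximation of averaging each student's individual gain, but it's close enough to see where a class sits relative to Hake's "traditional" and "interactive engagement" bands.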
This year, based on some suggestions from the modeling listserv, I added the Lawson Classroom Test of Scientific Reasoning to the mix (Lawson, 1978; Lawson et al., 2000). Although I find the information-processing discussion in the Hestenes (1979) article far too rational and mechanical to be a very useful view of my very human students, I do think the Piagetian framework for students' reasoning skills should be helpful. When I analyzed my Lawson data, I had two pleasant surprises and one unpleasant one. Can you tell what they were?
On the plus side, my students come to me further along the concrete-formal transition than the average ninth graders collected by O'Donnell (2011). They also make good progress in moving along that transition over the course of the year. On the minus side, there's no correlation between their reasoning skills and their gains on the FCI. I am not, as a teacher, taking advantage of their scientific thinking abilities to help them do a better job of constructing solid Newtonian physics concepts.
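For the curious, that "no correlation" conclusion came from checking the Pearson correlation between students' Lawson scores and their FCI gains. A sketch of that check, with made-up illustrative numbers (not my students' actual data):

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Made-up illustrative numbers, NOT real student data:
lawson = [14, 18, 9, 21, 12, 16, 7, 19]    # Lawson pre-test scores
fci_gain = [3, 1, 4, 2, 5, 1, 3, 2]        # raw FCI point gains

print(f"r = {pearson_r(lawson, fci_gain):.2f}")
```

An r near zero means that, within the class, knowing a student's reasoning-skill score tells you essentially nothing about how much their FCI score will improve, which is exactly the uncomfortable result I found.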
That's why I'm here. I want to do better. That first graph, while a bit depressing right now, offers hope that better is attainable. (Although I always wonder how valid the benchmark lines are for 9th grade physics.) And that Hestenes 1979 article tells me that, "The teacher can no more develop effective new curricula and teaching techniques on their own than he can discover ab initio the basic principles of the science he teaches. If the profession of teaching is ever to transcend the folklore state ... it must be guided and supported by a program of profound educational research." I'm hoping Modeling Instruction will be that guide and support for me.
Works cited (No, my blog posts don't usually have works cited lists, but this one is borrowed and modified from the student improvement data write-up I did last week, so I already had the references handy.)
Hake, R. R. (1998). Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 66(1), 64-74. http://dx.doi.org/10.1119/1.18809
Hestenes, D. (1979). Wherefore a science of teaching? The Physics Teacher, 17(4), 235-242.
Hestenes, D., Wells, M., and Swackhamer, G. (1992). Force Concept Inventory. The Physics Teacher, 30, 141-158.
Lawson, A. E. (1978). The development and validation of a Classroom Test of Formal Reasoning. Journal of Research in Science Teaching, 15(1), 11-24.
Lawson, A. E., Clark, B., Cramer-Meldrum, E., Falconer, K. A., Sequist, J., and Kwon, Y.-J. (2000). Development of scientific reasoning in college biology: Do two levels of general hypothesis-testing skills exist? Journal of Research in Science Teaching, 37(1), 81-101.
O'Donnell, J. R. (2011). Creation of national norms for scientific thinking skills using the Classroom Test of Scientific Reasoning (Unpublished master's thesis). Winona State University, Winona, Minnesota. Downloaded from modeling.asu.edu, October 15, 2013.