This is a fascinating article -- I recommend reading it. Go ahead, I'll wait.
To sum up, the LA Times took data from the Los Angeles Unified School District (LAUSD) on year-to-year test score changes for each student and hired an outside consultant to calculate teacher effectiveness by the "value add" of each teacher, averaged over all their students over several years.
How this works: if Jane Doe was a 60th percentile student last year and after a year in your class she tested at the 65th percentile, you're given credit for helping her raise her scores; if she drops to the 55th percentile, your effectiveness is questioned. But of course, other things could have been going on in Jane's world that year; you weren't the only factor. Therefore, the change in relative scores of individual students is averaged over a whole class (or several classes) of students, over several years, and the resulting trends are probably a valid reflection of your academic effectiveness as a teacher relative to your peers.
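The averaging step described above can be sketched in a few lines of code. This is a minimal illustration of the basic idea, not the consultant's actual model (which presumably involves more sophisticated statistical controls); the data, names, and numbers here are made up.

```python
# Minimal sketch of the "value add" idea: average each student's
# year-over-year percentile change, grouped by teacher.
# All records below are hypothetical illustration, not real data.

from collections import defaultdict

# One row per student-year: (teacher, student, percentile_before, percentile_after)
records = [
    ("Smith", "Jane", 60, 65),   # Jane rose 5 percentile points
    ("Smith", "Omar", 50, 53),
    ("Smith", "Lee",  70, 68),   # Lee dropped 2 points
    ("Jones", "Ana",  40, 35),
    ("Jones", "Raj",  55, 54),
]

def value_added(rows):
    """Average percentile change per teacher across all their student-years."""
    changes = defaultdict(list)
    for teacher, _student, before, after in rows:
        changes[teacher].append(after - before)
    return {t: sum(c) / len(c) for t, c in changes.items()}

print(value_added(records))
# → {'Smith': 2.0, 'Jones': -3.0}
```

A single up-or-down year for one student tells you almost nothing; the averaging over many students and several years is what makes the trend meaningful, which is the point the paragraph above is making.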
The Times found several interesting things:
- Fairly large variations in the effectiveness of teachers within each school, much larger than the variation between schools.
- Very little correlation between teacher effectiveness and teacher experience, education, or training.
- Very little correlation between teacher effectiveness and student race, wealth, English proficiency, or previous performance.
- A disconnect between effectiveness as perceived from dedication and professional accomplishments and effectiveness as demonstrated by the data analysis.
In some ways the most interesting aspect of all this is that the LAUSD had the data to do this but, despite urging by their internal experts, chose not to apply it this way. This has left them open to a story like this LA Times piece, which is now the loudest voice on how this data can be used -- the teachers' union and the school district have lost control of the dialogue by holding back. This has also led to situations where individual principals are trying to estimate the results using "back of the envelope" calculations. Those principals have found that addressing the low-effectiveness teachers is challenging -- coaching helps, sometimes. Figuring out what to do with this data is going to be challenging, but ignoring it isn't going to fix the problems.
For the record, this is a measure of teaching effectiveness I, as a teacher, think I can live with. It focuses on my individual impact without penalizing me for whatever experience my students had in prior years.
The article ends by bemoaning the lack of transparency on individual teacher effectiveness for parents. I do find it strange that in a world where I can read 15 reviews of a vet, a restaurant, or a dentist before choosing to use their services, this kind of data isn't available to parents. What do you think?
(I found this article from a post on Eduwonk, which has a generally similar take to mine, but thinks the story should have left the teachers anonymous and questions the finding that high-poverty schools do not have a higher concentration of low-effectiveness teachers.)