In Teaching, Sources of Error Trump “Quality”

Some of my fellow Teach For America alumni recently had a piece published in the Star Tribune supporting the bill that would replace seniority with a not-yet-developed teacher evaluation system as the primary tool for guiding teacher layoffs. The column repeats some of the usual claims about seniority and teacher quality, and it warrants a dissenting opinion.

It begins by correctly identifying teachers as the most important in-school factor for student success. What doesn't get mentioned is the relative size of in-school versus out-of-school effects. We'll come back to this.

The column cites Eden Prairie's recent layoff of 50 teachers as an example of seniority's unfairness. I'm willing to bet this is the annual "pink slipping" of probationary (untenured) teachers that most schools go through: because of budgetary uncertainty, schools lay off most or all of their probationary teachers each spring, then rehire them months later when the budget picture is clearer. Here's the thing: the rehiring isn't done by seniority. A school can rehire second-year teachers while leaving third-year teachers out. Teachers whom schools don't want can be laid off, never rehired, and never given the chance to gain seniority.

Then the article rehashes the argument that seniority-guided layoffs of tenured teachers mean that we sometimes lay off a Teacher of the Year instead of a less effective, more senior colleague. This is indeed a problem, but it's not one that's fixed by systems (like the one in development) that place significant weight on test scores not intended for teacher evaluation. States and cities that experimented with so-called "value added" measurements in teacher evaluation ended up laying off Teachers of the Year instead of less effective colleagues who got luckier in the test score lottery.

This is where the in-school/out-of-school distinction comes in. Any given student's test score may be an accurate indicator of his or her performance. That performance is a combination of teacher effectiveness, plus all the other in-school factors, plus all the out-of-school factors (which have a bigger impact than in-school factors).

Given a sample size of only twenty to thirty data points, it's impossible to distinguish a given teacher's effectiveness from all the other factors. The most dangerous part of "value added" measurements is that they claim to measure teacher quality when they actually don't. We shouldn't trade one flawed system for another.
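To see why twenty to thirty data points can't isolate a teacher's contribution, consider a toy simulation. All of the numbers below (the size of the out-of-school noise, the assumed spread of true teacher effects, the class size) are illustrative assumptions, not real education statistics; the point is only the arithmetic that class averages of noisy scores wobble by about as much as a plausible teacher effect.

```python
import random
import statistics

random.seed(42)

# Illustrative assumptions -- not real data:
TEACHER_EFFECT_SD = 2.0   # assumed spread of true teacher effects (points)
OUT_OF_SCHOOL_SD = 10.0   # assumed student-level noise from non-teacher factors
CLASS_SIZE = 25           # the "twenty to thirty data points" per teacher
N_TEACHERS = 1000

# Every simulated teacher here is identical (true effect = 0), so any
# spread in class averages is pure noise from student-level factors.
class_means = []
for _ in range(N_TEACHERS):
    scores = [random.gauss(0, OUT_OF_SCHOOL_SD) for _ in range(CLASS_SIZE)]
    class_means.append(statistics.mean(scores))

noise_sd = statistics.stdev(class_means)
print(f"SD of class averages from noise alone: {noise_sd:.2f} points")
print(f"Assumed SD of true teacher effects:    {TEACHER_EFFECT_SD:.2f} points")

# With 25 students, noise alone shifts a class average by roughly
# OUT_OF_SCHOOL_SD / sqrt(25) = 2.0 points -- as large as the assumed
# teacher effect itself, so a single year's class average cannot tell
# a strong teacher from an average teacher with a lucky class.
```

Under these assumptions, identical teachers produce class averages that vary as much as the supposed difference between good and average teachers, which is the "test score lottery" described above.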

