Teachers union official questions LA Times’ reliance on ‘value-added’ metric used to evaluate teachers
For the second time in 12 months, the Los Angeles Times last Sunday published a comprehensive evaluation of teachers within the Los Angeles Unified School District using the “value-added model.” But an expert on such evaluations at the American Federation of Teachers tells The American Independent that because the metric predicts a student’s potential rather than measuring actual progress, it is an unfair standard by which to gauge teachers’ efficacy.
The Times says the “value-added” model “projects a child’s future performance by using past scores — in this case, on math and English tests. That projection is then compared to the student’s actual results. The difference is the ‘value’ that the teacher added or subtracted.” The scores of 11,500 teachers were made available to the public on the Times’ website. Each scorecard is accompanied by a comment section where teachers can explain their results. Parents and students are also welcome to describe their experiences with that educator and school. An internal search engine allows parents to easily look up teachers by last name or by the school where they teach.
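The arithmetic the Times describes — project a score from past results, then subtract the projection from the actual result — can be sketched in a few lines. This is a minimal illustration with invented scores and a simple linear projection, not the Times’ actual model, which uses richer controls:

```python
# Minimal sketch of a "value-added" style calculation.
# All scores below are invented for illustration.

# Hypothetical district-wide data used to fit the projection
# (least-squares line from last year's score to this year's).
district_past = [500, 550, 600, 650, 700]
district_actual = [510, 555, 605, 660, 710]

n = len(district_past)
mean_p = sum(district_past) / n
mean_a = sum(district_actual) / n
slope = sum((p - mean_p) * (a - mean_a)
            for p, a in zip(district_past, district_actual)) / \
        sum((p - mean_p) ** 2 for p in district_past)
intercept = mean_a - slope * mean_p

# One hypothetical classroom: project each student's expected score
# from last year's score, then compare with the actual result.
class_past = [540, 610]
class_actual = [565, 640]
projected = [intercept + slope * p for p in class_past]

# The teacher's "value added" is the average of actual minus projected.
value_added = sum(a - pr for a, pr in zip(class_actual, projected)) / len(projected)
print(round(value_added, 2))
```

A positive result means the classroom outperformed its projection; a negative one means it fell short.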
Groups representing teachers and the school district, the second largest in the country, were critical of the paper’s decision to publish similar results last year. While the school district attempted to dissuade the Times this second time, union groups have largely stayed silent. George Jackson, a public affairs official with the American Federation of Teachers (AFT), said, “there was back and forth last year [between the union and the Los Angeles Times], but things are kind of quiet now.”
While the newspaper was careful to explain the limitations of the statistically generated results, its FAQ section on the topic reads, “Research has repeatedly found that teachers are the single most important school-related factor in a child’s education. Until now, parents have had no objective information about the effectiveness of their child’s teacher.”
To Rob Weil, director of field programs and educational issues at the AFT, the way the Times characterizes the soundness of the research is problematic. While he doesn’t disagree with the paper’s definition of “value-added,” he cautions, “value-added isn’t like statistics. If we said we want an average of numbers from the same data set, everyone does it the same way,” he told The American Independent. “But value-added is a developing statistical tool,” and therefore inconsistent. He points to a study by analysts at the University of Colorado at Boulder who ran the same data set the Times used only to come up with substantially different results. The crux of the analysts’ findings is posted below:
For reading outcomes, our findings included the following:
• Only 46.4% of teachers would retain the same effectiveness rating under both models, 8.1% of those teachers identified as effective under our alternative model are identified as ineffective in the L.A. Times specification, and 12.6% of those identified as ineffective under the alternative model are identified as effective by the L.A. Times model.
For math outcomes, our findings included the following:
• Only 60.8% of teachers would retain the same effectiveness rating, 1.4% of those teachers identified as effective under the alternative model are identified as ineffective in the L.A. Times model, and 2.7% would go from a rating of ineffective under the alternative model to effective under the L.A. Times model.
Weil doesn’t dispute the potential of the “value-added” approach, but he objects to the swiftness with which media outlets and some reformers have embraced the metric. He stresses that “value-added” is a mechanism for measuring potential rather than an evaluation of any real progress made by the student.
What complicates matters is that the projection is based on the student’s performance at the previous grade level. For “value-added” to depict an accurate picture of how much a student learned while controlling for factors like income, family stability and neighborhood violence, the publishers of the study would have to assume “random assignment” in how students are placed in classrooms.
Randomization is essential to statistical reliability. But parents often intervene in the enrollment decisions schools make for their children, preferring certain teachers over others. While Weil said parents have every right to feel a certain teacher is more likely to better educate their children, such choices undermine the statistical soundness of the “value-added” mechanism.
For example, in most public policy polling results, the sampling error stated is within three to five percentage points. The “value-added” sampling error is in a much higher range — 10 to 20 percentage points. To statisticians, that range makes “value-added” useful only in separating the highest performers from the lowest performers; the results in the middle are “statistical noise.”
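The consequence of such a wide error band can be shown with a toy classifier. In this sketch the margin of error and the estimates are invented; the point is only that with a wide margin, an estimate is distinguishable from zero only at the extremes:

```python
# Sketch: with a wide error band, only extreme "value-added"
# estimates can be told apart from zero. Numbers are invented.
MARGIN = 15.0  # hypothetical margin of error, in percentile points

def rating(estimate, margin=MARGIN):
    # Flag a teacher only when the whole interval sits above or
    # below zero; anything straddling zero is statistical noise.
    if estimate - margin > 0:
        return "high"
    if estimate + margin < 0:
        return "low"
    return "indistinguishable"

print([rating(e) for e in (22.0, 4.0, -3.0, -18.0)])
# → ['high', 'indistinguishable', 'indistinguishable', 'low']
```

Shrinking the margin toward polling’s three-to-five-point range would let far more of the middle estimates be classified.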
The American Independent asked whether an evaluation device known as “simple gain,” which compares how knowledgeable a student was going into the classroom to how much the student learned, is more effective. Weil said simple gain “is even more prone to volatility, because it cannot control for out-of-classroom impediments to the student’s ability to learn effectively.” Improvements to “value-added” assessments mean those factors can eventually be controlled for, he said, but those refinements are down the road.
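The contrast with “value-added” is that simple gain involves no projection at all: it is outgoing score minus incoming score, with no controls. A minimal sketch, again with invented scores:

```python
# Sketch of the "simple gain" comparison: outgoing minus incoming,
# no projection, no controls. Scores are invented for illustration.
incoming = [540, 610]   # hypothetical scores entering the classroom
outgoing = [565, 640]   # hypothetical scores leaving it

gains = [out - inc for inc, out in zip(incoming, outgoing)]
simple_gain = sum(gains) / len(gains)
print(simple_gain)
```

Because nothing outside the classroom is adjusted for, a disruptive year at home or in the neighborhood shows up directly in the gain, which is the volatility Weil describes.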
As to why the Los Angeles Times is pursuing the topic, Weil offers two points. With respect to the results the paper published, he says, “they’re not trying to be inaccurate, it’s just the nature of mathematics.” But he also speculates that media outlets relying on “value-added” models to force teachers to defend their talents in the public eye do so “because they make money off of it.”