U.S. Secretary of Education Arne Duncan has tempered his initial enthusiasm for publishing teacher effectiveness ratings based on test scores.
Last summer, after the Los Angeles Times published "value added" ratings of 6,000 elementary school teachers on a scale ranging from "least effective" to "most effective," Duncan applauded the paper for its action.
"What's there to hide?" Duncan said, as reported in an article headlined "U.S. schools chief endorses release of teacher data."
"In education, we've been scared to talk about success," he was quoted as saying.
But since then, he has gone out of his way to say that teacher effectiveness ratings should be based on far more than just test scores. They should also include parent feedback and portfolios of students' work, he said.
Just last week, in an open letter to teachers to mark "Teacher Appreciation Week," he said teacher evaluations should be based on "meaningful observations and input from your peers, as well as a sophisticated assessment that measures individual student growth, creativity, and critical thinking."
And he now avoids saying whether he thinks teacher-effectiveness ratings should be published by media outlets.
"There is no one right answer on that," he said in response to a question from California Watch at the Education Writers Association convention in New Orleans last month. "That should not be determined by us, that should be determined at the local level."
California Watch sent him a follow-up question to clarify his comments. Did he mean, we asked, that decisions about publication should be left to a local school board, or to a local media outlet?
His press office sent us this response on Duncan's behalf, making no mention of media publication.
Local school districts in real partnerships and collaborations with their teachers must decide for themselves how to share this information, how to put it in context, and how to improve teaching and learning.
Duncan has also steered clear of responding directly to criticism made by some leading statisticians about the problems inherent in the complex and evolving "value added" methodology, including those outlined by a panel convened by the National Research Council.
At the Education Writers Association conference, he said that "the perfect should not be the enemy of the good," implying that a methodology's imperfections should not, by themselves, be a reason to avoid using it.
However, in response to a follow-up question from California Watch, Duncan's office avoided any specific endorsement of "value added" approaches, but responded as follows:
Arne would not endorse the use of inaccurate teacher evaluation systems. He believes that "good" evaluation systems will be rigorous and methodologically sound. But he recognizes that "good" evaluations can be improved to better reflect teacher effectiveness and report better information to students and the public.
In his comments in New Orleans, Duncan seemed most upset that teachers had not been provided with information on their impact on students' test scores by their school district prior to the Times publishing its ratings. The Times calculated the rankings with the help of Rand Corporation researcher Richard Buddin.
"What bothers me most about the LA situation is that teachers were denied access to this information at a local level," Duncan said. "That is untenable to me. For them not to have access to it is absolutely nonsensical."
The controversy over when and which teacher ratings should be published is likely to continue after the Times published another in its ongoing "Grading the Teachers" series on Sunday. The paper published the names and "value-added" ratings of an additional 5,500 elementary school teachers, along with new ratings of the 6,000 names published last summer.
The ratings are far more nuanced than those published last August, and include additional variables. Remarkably, the Times shows how teachers would rank based on four different statistical models, including the one used by the Times.
The paper also emphasized that its teacher effectiveness rating should be used as "just one gauge of a teacher's overall performance." However, the paper does not include any other measures of teacher effectiveness.
The paper published the ratings despite a written request to consider dropping the story. The unusual letter was signed jointly by LA Unified's new superintendent, John Deasy, as well as the presidents of the district's school board, the Los Angeles Chamber of Commerce and the United Way.
The letter, sent to LA Times publisher Eddie Hartenstein, explained that LA Unified had developed its own scores on teachers' impact on their students' test scores, and would share those with teachers next month. Instead of publishing them in the Times, the group wrote, "individual evaluations, in our opinion, should be private conversations that are intended to help professionals improve their performance in the classroom."
In rejecting the request, Times editor Russ Stanton said publishing the database "is a service to the people of Los Angeles."
The issue of whether to publish individual teacher rankings is likely to get even more national attention if a current court battle in New York City is resolved in the school district's favor. Unlike in Los Angeles, the school district there seems eager to release the still-secret ratings to media organizations that have requested them. It says it is compelled to do so under the state's Freedom of Information Law.