September 2011
August 31, 2011

Teacher evaluation done right

Author: Sylvia Saunders
Source: NYSUT United
Caption: Evaluators from five pilot districts spent a week in intensive training learning how to use NYSUT's Teacher Practice Rubric. Albany teacher Sara McGraw, standing, compares notes with principal Vibetta Sanders. At right is Arbor Hill principal Rosalind Gaines-Harrell. Photo by Andrew Watson.
Lisa Goldberg

When North Syracuse teacher Lisa Goldberg came to Albany in January to learn more about a pilot teacher evaluation system, she was definitely skeptical.

The 20-year social studies veteran took one look at the 30-plus page performance evaluation and literally threw up her hands.

"This will take at least 15 hours to prepare," she told the training leader. "It's totally overwhelming."

Fast forward several months and Goldberg has come full circle. She's now stepped up to be a consulting teacher/evaluator for the next phase that will expand the program throughout her district.

"Believe me, I was cynical," said Goldberg, a member of the North Syracuse Education Association. "But I found the process to be so eye-opening, so personally rewarding and worth every minute ... I truly believe this evaluation model can help everyone at all levels — from the educator who is just starting out to the experienced teacher who is on the verge of burnout."

As the state starts phasing in new Annual Professional Performance Review regulations this fall, Goldberg's experience in test-driving NYSUT's Innovation Initiative evaluation model offers a telling look at what makes a good teacher observation.

The NYSUT Teacher Practice model — one of five rubrics approved by the State Education Department — was designed and field-tested by practitioners from the six school/district labor-management teams working on NYSUT's Innovation Initiative project for the last two years. (For more on the rubric, see related story.)

For Goldberg, it had been years since she had a formal observation. She had been involved in peer coaching and ongoing evaluation, but this felt more daunting.

"I was a nervous wreck," she said. "I remember waiting outside my classroom door for my two evaluators." One was elementary principal John Cole; the other was first-grade teacher Renee McFalls. Both spent a week of intensive training this spring learning how to use the evaluation tool and collect valid and reliable evidence of good teaching.

"At first I was a little anxious about how elementary level people were going to evaluate my high school social studies classes, but it turned out to be a hidden gift, really," she said. "They both had really helpful suggestions: Good teaching is good teaching, whatever level or subject."

Perhaps the best part of the new observation process was that Goldberg had already had a pre-conference with her evaluators, sharing her lesson goals and putting her strategies in context. "The beauty was that then they could understand what they were seeing," she said.

The collaborative approach made her feel "safe" to experiment and try new things.

Evaluators sat in on her team leadership class, which included a 10-minute activity where she had the students work in pairs to "tweet" a message as part of a bullying lesson.

"Before this, I didn't even know how many characters were in a tweet," she said. "But the kids loved it and it really got them to think on a much deeper level. The activity turned into something so much more than I expected. It felt good to take a risk and let the kids be my teacher."

After the formal observation, Goldberg wrote a self-reflective piece and got feedback in a post-conference. "It was supposed to be 30-40 minutes, and they adhered to that, which I found respectful of my time."

She noted the observation tool is structured so the feedback is not subjective. "It was evaluators collecting evidence, finding that something (like student engagement) was either present or not," she said. "It felt like a professional development meeting with peers, not an intimidating 'gotcha' evaluation."

When it came time for her second evaluation, Goldberg couldn't wait. This time it was a different class and subject area — preparation for the U.S. History Regents. "It was May and I had to prepare a lesson on civil rights, women's rights, the 1950s," she said.

Rather than a drill-and-kill session, Goldberg planned a "values clarification game" that encouraged the students to take stands on thought-provoking issues. Evaluators noted the high level of student engagement and higher-level thinking that would certainly help students with Regents essays.

"Given where I was in January, I can't believe I'm saying this," Goldberg said. "But I found the process to be affirming. We had such rich conversations about the practice of teaching ... I really didn't realize what a dearth of conversation there was. Teachers are so ridiculously busy — too often we don't carve out any time for ourselves, to reflect on what we're doing."

Goldberg does worry about how the model will "scale up," especially given time constraints and budget cuts. While her 15-hour prediction proved to be an exaggeration, she estimated that her lesson planning took about two or three hours and her post-observation work another hour or so. "In order for this system to have integrity, both the teachers and the evaluators need the time to do this right," she said.

And she worries that "numbers can be used against you," if they are used improperly.

Upon reflection, she said the model is a little like going to a yearly appointment with your doctor:

"It is hard to get that blood test and be vulnerable. You get your numbers and if there are any issues regarding your health, they are formally named and that can be scary. Conversely, if the numbers are good, you get reinforcement for what you are currently doing — and that feels great. The same is true of our practice. It can be scary to look at our teaching and see the places that need support. That takes work, the way that lowering your cholesterol or blood sugar does. But in both cases, knowledge is power — because whether we name the issues or not they are still there. The question becomes, Do we want to get better?"