Weighing Student Evaluations: Rubric Anyone?
by Anthony Halderman

“Yeah, students’ evaluations are only one part of a peer evaluation, but it’s a very important part,” says a veteran instructor and member of Cuesta College’s union executive board.

In slight contrast, a former Cal Poly University colleague of mine suggests that student evaluations can assess only a minor portion of an instructor’s classroom performance. He feels that a classroom visitation by another instructor provides a better assessment. In fact, his approach to student evaluations seems to parallel the title of an article in the NEA journal Thought & Action (Fall 2010), “Less-Than-Perfect Judges: Evaluating Student Evaluations.”

Student evaluations, first introduced in the 1920s and more regularly adopted in the 1960s, afford us some insight into an instructor’s overall classroom performance. However, a paradox exists between a university or college’s responsibility to satisfy its students and its responsibility to educate those very pupils. A merely happy student is not necessarily a sufficiently educated one.

So then, what role do student evaluations play in an overall peer evaluation? Well, actually both positions hold some truth. According to Dr. Julian Crocker, San Luis Obispo County Superintendent of Schools, many districts are re-evaluating and emphasizing the process of student evaluations in the peer evaluation procedure. Along with test scores and measurable student performance, student feedback/evaluations can provide some critical insight into an instructor’s performance.

However, even though many of us, if not all of us, agree that student evaluations of an instructor play a role in the whole peer evaluation process, few of us would agree on exactly how much weight student evaluations should carry. I even asked a dean at Cuesta Community College whether we have a rubric clearly indicating how student evaluations are weighted in the peer evaluation process. Of course, this dean didn’t have one. Yet a fair, reasonable, and accurate peer evaluation should clearly indicate the weighting of test scores, measurable student performance, course materials, professional development, and, of course, student evaluations.

The current, general approach to student evaluations seems to allow peer evaluators to weigh the importance of student evaluations at their own personal discretion. In fact, I personally know one peer evaluator who, upon seeing weak student evaluations, told the evaluee, “These are some of the lowest student evaluations I’ve seen in a while.” This evaluator continued to focus on the “low” student evaluations and used them as evidence that the evaluee needed to improve her teaching performance.

Yet, in true contradiction, this very same evaluator, upon seeing very respectable student evaluations for a different instructor, said, “So. They’re just students. I’m the professional.” This peer evaluator went on to completely dismiss those very respectable student evaluations. Of course, such a dismissal in a peer evaluation completely surprised the evaluee. How can student evaluations carry significant weight in one peer evaluation but little weight in another? Where’s the rubric for assessing student evaluations?

This vacillation, inconsistency, and, at times, plain cherry-picking has prompted me to reflect on the meaning, value, and importance of student evaluations and feedback. Perhaps my former Cal Poly colleague would agree that student evaluations should carry only modest weight because many university students have already demonstrated effective study skills. The success and retention rates at universities aren’t as low as those of some English as a Second Language (ESL) programs in community colleges.

However, depending on one’s discipline, I agree with the Cuesta College professor and executive board member that student evaluations are critical. If the ESL population at your community college is similar to that of other ESL programs, you’re probably aware that ESL students have low rates of success and retention. The ESL Program at Cuesta College is among the lowest in the California Community Colleges.

In fact, I addressed low ESL rates of success and retention in a 2008 CATESOL News article titled “More Edutainment, Please.” In that article, I suggest that one way to improve success and retention is to make second language acquisition as fun and exciting as possible. Lecture-heavy courses will surely put many ESL students to sleep. Of course, ESL students need to learn and be tested, but they also need to enjoy their courses. Student evaluations provide critical insight into how their courses are progressing: the higher the student evaluations, the higher the success and retention.

As a result of my broaching this topic with the CATESOL Board of Directors, the current community college level chair (a position I held from 2010 to 2012), other board members, and I will craft a tentative rubric and make it available to other programs. In the most general terms, the rubric weighs four categories of the peer evaluation equally: 25% class materials, 25% classroom visitation, 25% professional development, and 25% student evaluations. Of course, its current state requires more detail (TBA).
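For illustration only, the equal weighting above amounts to a simple weighted sum. The sketch below is my own hypothetical rendering of that arithmetic, not part of the tentative rubric itself; the 0–100 scoring scale and category names are assumptions for the example.

```python
# Hypothetical sketch of an equal-weight peer-evaluation rubric.
# Each category is scored 0-100 (an assumed scale); every category
# contributes 25% to the composite evaluation score.

WEIGHTS = {
    "class_materials": 0.25,
    "classroom_visitation": 0.25,
    "professional_development": 0.25,
    "student_evaluations": 0.25,
}

def composite_score(scores):
    """Return the weighted sum of the four category scores."""
    if set(scores) != set(WEIGHTS):
        raise ValueError("scores must cover exactly the four rubric categories")
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

# Example: strong materials and development, weaker student evaluations.
example = {
    "class_materials": 90,
    "classroom_visitation": 85,
    "professional_development": 95,
    "student_evaluations": 60,
}
print(composite_score(example))  # 82.5
```

Because the weights are explicit, student evaluations can pull the composite score down only by their fixed 25% share; no single evaluator’s discretion can silently inflate or dismiss them.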

As evidenced by my personal and limited example, different colleges and universities, as well as different disciplines, weigh student evaluations differently. This difference can reflect a legitimate reason, but unfortunately it can also mirror an evaluator’s personal discretion and bias. A rubric aims to eliminate the latter. To further compound the issue, union contracts and bargaining units will also play a role in exactly how educational institutions craft and execute these rubrics. Unions will need to negotiate these rubrics for campus-wide use, but individual divisions may possess a certain degree of latitude in implementing their own. Divisions, however, can never override the union’s position on a topic. One former union president of a community college mentioned that amending the evaluation form and process shouldn’t be too difficult if members fully support it. You’ll need to contact your union representative and review your contract.

Dr. Julian Crocker correctly remarks that many schools are in fact re-evaluating and emphasizing the process of student evaluations in the peer evaluation procedure. But to what degree requires further exploration and dialogue. So the next time you have a scheduled peer evaluation, be sure to ask your evaluator, administrator, union representative, and/or colleague, “Exactly how are the student evaluations weighted? Where’s the rubric?”

Anthony Halderman is a college and university English instructor.
