Summary: If you are planning professional development on the assessment of writing that involves students whose first language is not English, you may want to read this thoughtful article. The authors, the site director and two bilingual teacher leaders from the Hudson Valley Writing Project, describe an inquiry that focused on the question, "What if the writing rubrics we use don't make sense to our bilingual students or their teachers?" By engaging in and studying a multi-faceted process of translating a rubric from English into Spanish, the team developed a rich approach to teacher reflection on student writing, assessment, and writing instruction. Specific suggestions for planning are provided.
Our team—two bilingual teacher-consultants (Martha and Fabiola) and a writing project site director (Tom)—began with a measurement conundrum. The Hudson Valley Writing Project (HVWP) at SUNY New Paltz had been involved in a research project designed to help us understand the impact of our bilingual writing program for migrant youth, nearly all of whom spoke Spanish as a first language. Consistent with the instructional philosophy of the program, and based on our understanding of literacy research, we believed that young writers could develop fluency in both English and Spanish, and we encouraged the students to consider their lives and language as resources for regular daily writing (de la Luz Reyes and Halcón; Moll, Amanti, Neff, and Gonzalez). We urged students to code-switch and to use Spanish words that held particular meanings and need not be translated, for instance, mami (mommy). When writing personal narratives, we encouraged bilingual writers to use the language that matched the moment of the memory. For example, if a student's memory of a first bicycle dated from a time when the writer was exclusively a Spanish speaker, then writing the narrative in Spanish might be more appropriate than writing it in English. In sum, we believed that writing in a first language would not deter learning in a second language and that some of the writing processes and satisfactions could transfer to subsequent learning (Pérez).
When our study began in 2006, we had rich data indicating that the program was important for the students. Not only had we observed the classes, but we had also collected students' drawings, surveys, and sample writing from their portfolios. Additionally, we had interviewed teachers, students, and program administrators. Nonetheless, we knew little about the majority of the students' pre- and post-writing samples, which had been independently scored at a national scoring conference. Students who elected to write in English scored poorly, averaging between 1 and 2 on the holistic, six-point Analytic Writing Continuum (AWC) scale. Complicating matters further, we had no results for those who opted to write in Spanish (the majority), since their writing samples went unscored. The English writing results were not flattering to our program, and we knew nothing about what happened when bilingual students opted to write in their first language, something we had encouraged them to do.
We were disappointed that the Spanish writing had not been scored; however, we understood that accurate scoring required readers who were both knowledgeable about assessment and fluent in Spanish. We began to wonder what would have happened if we had translated the original writing samples from Spanish into English and then had them scored. Quickly, though, we surmised that many of the nuances of communication and culture would be lost in translation, and lost on readers viewing the writing through the lens of a rubric designed for scoring the writing of "L1" American students, those whose first language was English. We came to believe that if we translated the NWP's Analytic Writing Continuum into Spanish before scoring any of the Spanish writing, we might end up with more valid scores.