Is Cross-Marking A Way To Increase Rater Reliability?

Author:

Year / Number: 2018, Volume 6, Issue 3
Language: English
Pages: 331-346

Abstract

Most error correction research has focused on whether teachers should correct errors in student writing, how they should do it, and how extensive the correction should be. Recent research has therefore concentrated largely on the pedagogical merits of error correction and its possible benefits for student learning. However, in contexts where several graders score the same paper, little has been done to investigate whether one grader’s corrections influence other graders, or whether writing teachers’ corrections on students’ papers have a positive or negative impact on score reliability when raters see another grader’s corrections on the papers they mark. This study set out to explore whether corrections made by graders affect the scores of colleagues who score the same papers a second time, a practice intended to yield more accurate results and to ensure rating reliability. To that end, 12 writing teachers graded 20 essays written by intermediate-level English learners. The participants were first asked to grade 10 papers without making any error corrections; those papers were re-scored after 3 weeks by the same graders, and inter-rater and intra-rater reliability computations were carried out on this set to establish the raters’ baseline reliability under normal circumstances. In the second stage, the graders scored the remaining 10 papers while also marking errors on them, and after 3 weeks the same teachers graded the papers that had been corrected by their paired graders. The scores assigned to these papers on each occasion by the same raters were compared statistically to investigate the effect of error correction on the scores. The results revealed that error marking and grader comments on writing papers may have a negative effect on intra-rater reliability, whereas they may have a positive effect on inter-rater reliability when a pool of raters grades the same papers.
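The abstract does not specify which reliability coefficients were computed. A minimal sketch, assuming Pearson correlations between paired score sets (a common choice for inter-rater and intra-rater consistency), is given below; all scores and variable names are hypothetical and purely illustrative.

```python
# Illustrative sketch: inter-rater and intra-rater reliability as Pearson
# correlations between paired score sets. The coefficient choice is an
# assumption, not the study's reported method; all scores are hypothetical.
from scipy.stats import pearsonr

# Hypothetical scores for 10 papers.
rater_a_first  = [72, 65, 80, 58, 90, 77, 61, 84, 70, 68]   # rater A, first scoring
rater_a_second = [70, 66, 78, 60, 88, 75, 63, 85, 72, 67]   # rater A, 3 weeks later
rater_b_first  = [74, 63, 82, 55, 92, 79, 60, 86, 69, 70]   # rater B, first scoring

# Intra-rater reliability: consistency of one rater across two occasions.
intra_r, _ = pearsonr(rater_a_first, rater_a_second)

# Inter-rater reliability: agreement between two raters on the same papers.
inter_r, _ = pearsonr(rater_a_first, rater_b_first)

print(f"intra-rater r = {intra_r:.2f}")
print(f"inter-rater r = {inter_r:.2f}")
```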
