Reading complexity judgments: Episode 3
Gholam Reza Haji Pour Nezhad, Tehran University
Question 4: How do complexity ratings by students differ from those made by teachers?
Table 1. Student and Teacher Judgments of Item General Complexity Based on Factuality/Inferentiality Judgment Responses
STEM: The fat hens and chickens in the box beyond the fence were what the fox looked at.
RESPONSE: Because the fox was hungry, it stopped when it saw them.
Question 5: How do stem-response combinations influence perceived complexity order rankings?
Table 2. Most/Least Complex Items Based on Statement-Restatement Combinations
Figure 1. Estimated marginal means of state-restate items for student respondents
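For readers unfamiliar with the statistic plotted in Figure 1, the sketch below shows one common way estimated marginal means are obtained: fit a linear model with categorical predictors and average its predictions over a balanced grid of factor levels. The data layout and column names (rating, item_type, rater_group) are hypothetical and are not the authors' analysis script.

# A minimal sketch, assuming a long-format data frame with one rating per
# row; the column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

def estimated_marginal_means(df: pd.DataFrame) -> pd.Series:
    """Return the model-based mean rating for each item_type level."""
    model = smf.ols("rating ~ C(item_type) * C(rater_group)", data=df).fit()
    # Balanced reference grid: every item_type crossed with every rater_group.
    grid = pd.MultiIndex.from_product(
        [df["item_type"].unique(), df["rater_group"].unique()],
        names=["item_type", "rater_group"],
    ).to_frame(index=False)
    grid["predicted"] = model.predict(grid)
    # Average the predictions over rater_group to get the marginal mean
    # for each item_type, weighting all groups equally.
    return grid.groupby("item_type")["predicted"].mean()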
"non-expert informants' judgments of complexity are by no means random, but are a very systematic manifestation of their evaluation of factors producing text and test item difficulty." |
Alderson (1993) observed differences between teachers and students in determining item difficulty, and concluded that students' and teachers' judgments of item complexity are not reliable sources of information for determining item difficulty. However, he overlooked one aspect of this diversity of judgments: consistency. The present study examined whether students and/or teachers judge complexity in a consistent manner, and found meaningful patterns of consistency within each group. It did not, however, address another important aspect of complexity judgments: why this consistency arises. I believe a fruitful line of further research would be to ask why complexity judgments are consistent, what factors give rise to this consistency, and whether the same degree of consistency appears in judgments of items on standardized proficiency tests.
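As a concrete illustration of what such consistency could mean in practice, the sketch below quantifies how uniformly a group of raters orders items by complexity using Kendall's coefficient of concordance (W). The choice of statistic, the data layout, and the sample ratings are assumptions for illustration; they do not reproduce the study's actual analysis.

# A minimal sketch: each row of the ratings matrix is one rater, each
# column one test item; W near 1 means the raters rank the items alike.
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    """ratings: shape (n_raters, n_items). Returns W in [0, 1]."""
    m, n = ratings.shape
    # Rank items within each rater (ties receive average ranks).
    ranks = np.apply_along_axis(rankdata, 1, ratings)
    rank_sums = ranks.sum(axis=0)
    # Sum of squared deviations of the item rank sums from their mean.
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: three raters, four items; higher = more complex.
print(kendalls_w(np.array([[1, 2, 3, 4], [2, 1, 3, 4], [1, 3, 2, 4]])))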