Test-taker motivations
James Dean Brown, University of Hawai'i at Manoa
Perhaps you could tell the students that their performances on the posttest will be included as a component of their final course grades, or that a certain score (or score gain) must be achieved before they can finally be exempted from English training, or that the scores will be reported to their parents, and so forth. In short, if you want the results of that pretest/posttest comparison to have any meaning, you probably need to make some policy change that will ensure that performing well on the posttest is in the best interests of the students. [For more on other testing policy issues, see Brown, 2004.]
Brown, J. D. (1996). Testing in language programs. Upper Saddle River, NJ: Prentice Hall.
Brown, J. D. (2004). Grade inflation, standardized tests, and the case for on-campus language testing. In D. Douglas (Ed.), English language testing in U.S. colleges and universities (2nd ed., pp. 37-56). Washington, DC: NAFSA.
Brown, J. D., & Hudson, T. (2002). Criterion-referenced language testing. Cambridge: Cambridge University Press.
Messick, S. (1988). The once and future issues of validity: Assessing the meaning and consequences of measurement. In H. Wainer & H. I. Braun (Eds.), Test validity (pp. 33-45). Hillsdale, NJ: Lawrence Erlbaum Associates.
Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). New York: Macmillan.
Messick, S. (1996). Validity and washback in language testing. Language Testing, 13, 241-256.