Criterion-referenced item analysis
James Dean Brown, University of Hawai'i at Manoa
The fourth step in the above list – item analysis – is different for NRTs and CRTs. In the previous column, I explained how that step works for NRTs. In this column, I will explain item analysis for CRTs. Recall that the basic purpose of CRTs is to measure the amount (or percent) of material in a course or program of study that students know (or can do), usually for purposes of making diagnostic, progress, or achievement decisions (for much more on this topic, see Brown, 1995a, 1996, 1999; Brown & Hudson, 2002). Two item statistics are often used in the item analysis of such criterion-referenced tests: the difference index and the B-index.

"The difference index shows the gain, or difference in performance, on each item between the pretest and posttest."
Notice in Screen 1 that I have calculated the DI for Item 1 using my spreadsheet program. I did so by typing in the item numbers, then lining up my posttest and pretest item facilities as shown. Then, in cell F2, I typed =B2-D2 and hit the ENTER key. In other words, I subtracted the pretest IF from the posttest IF and got .70 as my result. Once the calculation in cell F2 was completed, I blocked and copied that cell (using CONTROL C to do so) and pasted it into cells F3 to F11 (by blocking them out and hitting CONTROL V). That copying yielded the other DI values. The DI tells me how much the students are improving between the pretest and posttest on each item (and, by extension, on the related curriculum objective). Like the item discrimination statistic discussed in the previous column, the higher the value of the DI, the better. Indeed, a value of 1.00 is a perfect difference index. Thus, in Screen 1, items 1, 3, and 7-10 are much better related to the curriculum than are items 2 and 4-6 because they have higher values. Items 4-6 do not fit well because they reflect only small gains (i.e., their values are very low); item 2, which has a negative value, indicates that, somehow, during the course, 80% of the students who started out knowing this item unlearned it by the end of the course.

"The B-index shows how well each item is contributing to the pass/fail decisions that are often made with CRTs."
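For readers who prefer a script to a spreadsheet, the same DI calculation can be sketched in a few lines of Python. This is only an illustration, not the column's own code: the formula is the one just described (posttest item facility minus pretest item facility), and the item-facility values below are invented for the sketch, except that item 1 is chosen to give the .70 result from the worked example and item 2 the negative value discussed above.

```python
def difference_index(if_posttest, if_pretest):
    """DI for one item: posttest item facility minus pretest item facility
    (the spreadsheet formula =B2-D2 in the text)."""
    return if_posttest - if_pretest

# Hypothetical (pretest IF, posttest IF) pairs for three items.
items = {
    1: (0.20, 0.90),  # DI = .70, matching the worked example
    2: (0.90, 0.10),  # DI = -.80, the "unlearned" item
    3: (0.30, 0.95),  # a strong gain
}

for item, (if_pre, if_post) in items.items():
    print(f"Item {item}: DI = {difference_index(if_post, if_pre):+.2f}")
```

As in the spreadsheet, a DI near 1.00 signals an item (and objective) on which students gained a great deal; values near zero or below flag items to inspect.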
Notice in Screen 2 that I have calculated the B-index for Item 1 using my spreadsheet program. I arranged my data by typing labels for Student ID and the item numbers across the first row. Then, I typed in all the students' names, as well as 1s for items they answered correctly and 0s for items they answered incorrectly. I next calculated the total score for each student and rank-ordered the students from highest to lowest scores. Finally, to make it easy to visualize the passing and failing groups, I put a blank row between those who passed (i.e., scored above the 60% cut-point in this case) and those who failed.
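The data arrangement above can also be sketched in Python. In Brown's formulation, the B-index for an item is the item facility among the students who passed minus the item facility among the students who failed, so the passing/failing split is the whole point of the sorting just described. The six students' responses and total scores below are invented for illustration; only the 60% cut-point comes from the text.

```python
def b_index(item_responses, total_scores, cut_point):
    """B-index for one item: IF among passers minus IF among failers.
    item_responses: 1/0 per student; total_scores: percent scores."""
    passers = [r for r, s in zip(item_responses, total_scores) if s >= cut_point]
    failers = [r for r, s in zip(item_responses, total_scores) if s < cut_point]
    if_pass = sum(passers) / len(passers)
    if_fail = sum(failers) / len(failers)
    return if_pass - if_fail

# Invented example: answers on one item and total percent scores
# for six students, with the 60% cut-point from the text.
responses = [1, 1, 1, 0, 0, 1]
scores = [90, 80, 70, 50, 40, 30]
print(f"B-index = {b_index(responses, scores, 60):.2f}")
```

A B-index near 1.00 means the item sharply separates those who passed from those who failed; a value near zero means the item tells you nothing about that decision.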
"Generally, the difference index will tell you how well each item fits the objectives of your curriculum, and the B-index will tell you how well each item is contributing to the pass/fail decision that you must make at whatever cut-point you are using."
The other definition is that the criterion is the standard of performance (or cut-point for decision making) that is expected for passing the test/course. Thus criterion-referenced testing would be used to assess whether students pass or fail at a certain criterion level (or cut-point). This definition fits very well with the B-index, which indicates how well each item is contributing to the pass/fail decision that you must make at whatever cut-point you are using.
Brown, J. D. (1995b). The elements of language curriculum: A systematic approach to program development. New York: Heinle & Heinle Publishers.
Where to Submit Questions: Please submit questions for this column to the following e-mail or snail-mail addresses:

brownj@hawaii.edu

JD Brown
Department of Second Language Studies
University of Hawaii at Manoa
1890 East-West Road
Honolulu, HI 96822 USA