Reliability of surveys
James Dean Brown, University of Hawai'i at Manoa
Quantitative researchers also refer to the consistency concept as dependability for criterion-referenced assessments, but they call it reliability for norm-referenced assessments. Criterion-referenced dependability typically involves one or more of the following three strategies (these three will not be defined here because they are not directly germane to the discussion at hand): threshold-loss agreement, squared-error loss, or domain-score dependability. Norm-referenced reliability usually involves one or more of the following three strategies: test-retest reliability, equivalent forms reliability, or internal consistency reliability (these three strategies will be defined below). Like qualitative researchers, quantitative researchers should stress the importance of multiple sources of information, especially in making important decisions about students' lives. (For much more information, including how to calculate the various reliability and dependability estimates, see Brown, 1996.)
Internal consistency reliability estimates come in many forms, e.g., the split-half adjusted (using the Spearman-Brown prophecy formula), the Kuder-Richardson formulas 20 and 21 (a.k.a. K-R20 and K-R21), and Cronbach alpha. The most commonly reported of these are the K-R20 and Cronbach alpha. Either one provides a sound estimate of reliability. However, the K-R20 is applicable only when questions are scored in a binary manner (i.e., right or wrong). Cronbach alpha has the advantage of being applicable when questions are themselves small scales, like the Likert-scale (i.e., 1 2 3 4 5 type) questions found on many questionnaires. Hence, Cronbach alpha is most often the reliability estimate of choice for survey research.
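To make the calculation concrete, here is a minimal sketch of Cronbach alpha in Python, using only the standard library. The formula is the standard one, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores); the function name and the sample data are my own illustrative inventions, not taken from the article.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach alpha for a survey.

    item_scores: one row per respondent, each row a list of k item
    scores (e.g., 1-5 Likert responses). With binary (0/1) items,
    this reduces to the K-R20 estimate.
    """
    k = len(item_scores[0])
    # Variance of each item (column) across respondents.
    item_vars = [statistics.pvariance([row[i] for row in item_scores])
                 for i in range(k)]
    # Variance of each respondent's total score across all items.
    totals = [sum(row) for row in item_scores]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical data: 5 respondents answering 4 Likert items.
data = [[5, 4, 5, 4],
        [4, 4, 4, 4],
        [2, 3, 2, 3],
        [1, 2, 1, 2],
        [3, 3, 3, 3]]
print(cronbach_alpha(data))  # ≈ 0.96
```

Note that respondents who answer consistently across items (as in the hypothetical data above) drive the estimate toward 1, while random, uncorrelated responses drive it toward 0.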
Looking at question quality, I note that the questionnaire was developed solely out of the researcher's experience and was a first attempt, so I would not expect it to be extremely reliable.