JALT Testing & Evaluation SIG

Statistics Corner: How do we calculate rater/coder agreement and Cohen's Kappa?

Article appearing in Shiken 16.2 (Nov 2012) pp. 30-36.

Author: James Dean Brown
University of Hawai'i at Manoa

Question:
I am working on a study in which two raters coded answers to six questions about study-abroad attitudes and experience for 170 Japanese university students. The coding was done according to a rubric with 4-8 possible responses per question. Since most, if not all, of the data are categorical, I have heard that Cohen's Kappa is the most common way of ascertaining inter-rater agreement. What is the best way to actually calculate it? Since more and more people are moving from single-rater to multi-rater assessments, this question should be relevant to Shiken Research Bulletin readers.

Answer:
To address your question, I will need to describe both the simple agreement coefficient and the Kappa coefficient. I will do so first with a simple example, then with the more complex data from your study.
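The full article works these coefficients out by hand; as a rough sketch of the same standard computation (the function names and the toy ratings below are illustrative, not taken from the article), the agreement coefficient is simply the proportion of items the two raters coded identically, and Kappa corrects that proportion for chance agreement:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items both raters coded identically (observed agreement)."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's Kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is the agreement expected by chance."""
    n = len(r1)
    p_o = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    # Chance agreement: for each category, multiply the two raters'
    # marginal proportions, then sum across categories.
    p_e = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Toy data: two raters assigning one of three rubric categories to five items.
rater1 = ["A", "A", "B", "B", "C"]
rater2 = ["A", "B", "B", "B", "C"]
print(percent_agreement(rater1, rater2))  # 0.8
print(cohens_kappa(rater1, rater2))       # 0.6875
```

Here the raters agree on 4 of 5 items (p_o = 0.8), chance agreement from the marginals is p_e = 0.36, so Kappa = (0.8 - 0.36) / (1 - 0.36) = 0.6875; note how Kappa is lower than raw agreement once chance is discounted.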

Download full article (PDF)
