IRT-based classification analysis of an English language reading proficiency subtest

Elif Kaya*, Stefan O'Grady, Ilker Kalender

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)
8 Downloads (Pure)

Abstract

Language proficiency testing serves the important function of classifying examinees into different categories of ability. However, misclassification is to some extent inevitable and may have important consequences for stakeholders. Recent research suggests that classification efficacy may be enhanced substantially through computerized adaptive testing (CAT). Using real-data simulations, the current study investigated the classification performance of CAT on the reading section of an English language proficiency test and compared it with the paper-based version of the same test. Classification analysis was carried out to estimate classification accuracy (CA) and classification consistency (CC) under different locations and numbers of cutoff points. The results showed that classification performance was adequate when a single cutoff score was used, particularly for high- and low-ability test takers, but declined significantly when multiple cutoff points were employed simultaneously. Content analysis also raised important questions about construct coverage in CAT. The results highlight the potential for CAT to serve classification purposes and outline avenues for further research.
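For readers unfamiliar with the Rudner approach named in the keywords, the sketch below illustrates how CA and CC are typically estimated under it: each examinee's true ability is treated as normally distributed around the IRT estimate with its standard error, and the probability mass falling in each cut-score category is computed. This is a minimal illustration, not the authors' code; the function name, array layout, and example values are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

def rudner_ca_cc(theta_hat, se, cuts):
    """Rudner-style classification accuracy (CA) and consistency (CC).

    theta_hat : ability estimates, one per examinee (theta scale)
    se        : standard errors of those estimates
    cuts      : sorted cutoff points on the theta scale
    """
    # Category boundaries: (-inf, c1), [c1, c2), ..., [cK, inf)
    bounds = np.concatenate(([-np.inf], cuts, [np.inf]))
    # p[i, k] = P(true theta of examinee i lies in category k),
    # assuming theta_hat_i ~ Normal(theta_i, se_i)
    z_hi = (bounds[1:][None, :] - theta_hat[:, None]) / se[:, None]
    z_lo = (bounds[:-1][None, :] - theta_hat[:, None]) / se[:, None]
    p = norm.cdf(z_hi) - norm.cdf(z_lo)
    # Observed category assigned from the point estimate
    obs = np.searchsorted(cuts, theta_hat, side="right")
    ca = p[np.arange(len(theta_hat)), obs].mean()  # P(assigned = true)
    cc = (p ** 2).sum(axis=1).mean()  # P(same category on a retest)
    return ca, cc

# Illustrative data: four examinees, one cutoff at theta = 0.5
theta = np.array([-1.2, 0.3, 0.8, 1.5])
se = np.array([0.35, 0.30, 0.28, 0.32])
print(rudner_ca_cc(theta, se, np.array([0.5])))
```

Note how the abstract's finding falls out of this formulation: with several cutoffs, probability mass leaks into more neighboring categories, so both CA and CC drop relative to the single-cutoff case, and the effect is weakest for examinees far from any cutoff (the high- and low-ability test takers).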

Original language: English
Pages (from-to): 541-566
Number of pages: 26
Journal: Language Testing
Volume: 39
Issue number: 4
Early online date: 27 Jan 2022
DOIs
Publication status: Published - 1 Oct 2022

Keywords

  • Classification accuracy
  • Classification consistency
  • Computerized adaptive testing
  • Language proficiency
  • Rudner approach
