[2603.19280] From Feature-Based Models to Generative AI: Validity Evidence for Constructed Response Scoring
Computer Science > Computation and Language

arXiv:2603.19280 (cs) [Submitted on 1 Mar 2026]

Title: From Feature-Based Models to Generative AI: Validity Evidence for Constructed Response Scoring

Authors: Jodi M. Casabianca, Daniel F. McCaffrey, Matthew S. Johnson, Naim Alper, Vladimir Zubenko

Abstract: The rapid advancements in large language models and generative artificial intelligence (AI) capabilities are making their broad application in the high-stakes testing context more likely. Using generative AI to score constructed responses is particularly appealing because it reduces the effort required for handcrafting features in traditional AI scoring and may even outperform those methods. The purpose of this paper is to highlight the differences between feature-based and generative AI applications in constructed response scoring systems and to propose a set of best practices for collecting validity evidence to support the use and interpretation of constructed response scores from scoring systems that use generative AI. We compare the validity evidence needed in scoring systems using human ratings, feature-based natural language processing (NLP) AI scoring engines, and generative AI. The evidence needed in the generative AI context is more extensive than in the feature-based scoring context because…
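As a minimal illustration of what one common strand of validity evidence looks like in practice, the sketch below computes quadratic weighted kappa (QWK), a standard human-machine agreement statistic for constructed-response scores. The score vectors and the 0-4 rubric scale are hypothetical assumptions for demonstration only, not data or methods from the paper, whose proposed best practices cover far more than any single agreement statistic.

```python
# Hedged sketch: quadratic weighted kappa (QWK) between human and
# AI-engine scores, one common piece of validity evidence for automated
# constructed-response scoring. The scores below are hypothetical.
from sklearn.metrics import cohen_kappa_score

human_scores  = [3, 2, 4, 1, 0, 3, 2, 4, 2, 3]  # hypothetical human ratings (0-4 rubric)
engine_scores = [3, 2, 3, 1, 1, 3, 2, 4, 3, 3]  # hypothetical AI-engine scores

# Quadratic weighting penalizes large disagreements (e.g., 0 vs. 4)
# more heavily than adjacent ones (e.g., 2 vs. 3).
qwk = cohen_kappa_score(human_scores, engine_scores, weights="quadratic")
print(f"Quadratic weighted kappa: {qwk:.3f}")
```

In operational settings such a statistic is typically reported alongside exact and adjacent agreement rates and score-distribution comparisons, rather than on its own.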