[2603.00465] Optimizing In-Context Demonstrations for LLM-based Automated Grading
Computer Science > Artificial Intelligence
arXiv:2603.00465 (cs) [Submitted on 28 Feb 2026]

Authors: Yucheng Chu, Hang Li, Kaiqi Yang, Yasemin Copur-Gencturk, Kevin Haudek, Joseph Krajcik, Jiliang Tang

Abstract: Automated assessment of open-ended student responses is a critical capability for scaling personalized feedback in education. While large language models (LLMs) have shown promise in grading tasks via in-context learning (ICL), their reliability is heavily dependent on the selection of few-shot exemplars and the construction of high-quality rationales. Standard retrieval methods typically select examples based on semantic similarity, which often fails to capture subtle decision boundaries required for rubric adherence. Furthermore, manually crafting the expert rationales needed to guide these models can be a significant bottleneck. To address these limitations, we introduce GUIDE (Grading Using Iteratively Designed Exemplars), a framework that reframes exemplar selection and refinement in automated grading as a boundary-focused optimization problem. GUIDE operates on a continuous loop of selection and refinement, employing novel contrastive operators to identify "boundary pairs" that are semantically similar but possess different g...
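The abstract's notion of "boundary pairs" — exemplars that are close in embedding space yet carry different grades — can be sketched as a simple mining step. The paper's GUIDE implementation is not described here, so the function name, threshold, and toy data below are illustrative assumptions, not the authors' code:

```python
# Hypothetical sketch of boundary-pair mining: find response pairs whose
# embeddings are highly similar (cosine) but whose grade labels differ.
# These pairs sit near the rubric's decision boundary.
import numpy as np

def find_boundary_pairs(embeddings, grades, sim_threshold=0.8):
    """Return (i, j, similarity) triples for cross-grade pairs whose
    cosine similarity meets the threshold, most similar first."""
    X = np.asarray(embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize rows
    sims = X @ X.T                                    # cosine similarity matrix
    pairs = []
    n = len(grades)
    for i in range(n):
        for j in range(i + 1, n):
            if sims[i, j] >= sim_threshold and grades[i] != grades[j]:
                pairs.append((i, j, float(sims[i, j])))
    return sorted(pairs, key=lambda p: -p[2])

# Toy usage: four response embeddings, two grade levels (data is made up).
emb = [[1.0, 0.0], [0.96, 0.28], [0.0, 1.0], [0.1, 0.99]]
grades = [1, 0, 0, 0]
pairs = find_boundary_pairs(emb, grades)
print(pairs)  # only responses 0 and 1 are both similar and differently graded
```

In a retrieval-based grader, the highest-ranked pairs would be the natural candidates to include as contrastive few-shot exemplars, since they show the model where nearly identical responses diverge in grade.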