[2603.24961] Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math
Computer Science > Artificial Intelligence
arXiv:2603.24961 (cs)
[Submitted on 26 Mar 2026]

Title: Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math
Authors: Dingjie Song, Tianlong Xu, Yi-Fan Zhang, Hang Li, Zhiling Yan, Xing Fan, Haoyang Li, Lichao Sun, Qingsong Wen

Abstract: Assessing students' handwritten scratchwork is crucial for personalized educational feedback but presents unique challenges due to diverse handwriting, complex layouts, and varied problem-solving approaches. Existing educational NLP focuses primarily on textual responses and neglects the complexity and multimodality inherent in authentic handwritten scratchwork. Current multimodal large language models (MLLMs) excel at visual reasoning but typically adopt an "examinee perspective", prioritizing generating correct answers over diagnosing student errors. To bridge these gaps, we introduce ScratchMath, a novel benchmark specifically designed for explaining and classifying errors in authentic handwritten mathematics scratchwork. Our dataset comprises 1,720 mathematics samples from Chinese primary and middle school students, supporting two key tasks: Error Cause Explanation (ECE) and Error Cause Classification (ECC), with seven defined error types. The dataset is meticulously annotate...