[2603.22289] MERIT: Memory-Enhanced Retrieval for Interpretable Knowledge Tracing
Computer Science > Computation and Language
arXiv:2603.22289 (cs) [Submitted on 3 Mar 2026]
Title: MERIT: Memory-Enhanced Retrieval for Interpretable Knowledge Tracing
Authors: Runze Li, Kedi Chen, Guwei Feng, Mo Yu, Jun Wang, Wei Zhang
Abstract: Knowledge Tracing (KT) models students' evolving knowledge states to predict future performance, serving as a foundation for personalized education. While traditional deep learning models achieve high accuracy, they often lack interpretability. Large Language Models (LLMs) offer strong reasoning capabilities but struggle with limited context windows and hallucinations. Furthermore, existing LLM-based methods typically require expensive fine-tuning, limiting scalability and adaptability to new data. We propose MERIT (Memory-Enhanced Retrieval for Interpretable Knowledge Tracing), a training-free framework combining frozen LLM reasoning with structured pedagogical memory. Rather than updating parameters, MERIT transforms raw interaction logs into an interpretable memory bank. The framework uses semantic denoising to categorize students into latent cognitive schemas and constructs a paradigm bank where representative error patterns are analyzed offline to generate explicit Chain-of-Thought (CoT) rationales. During inference, a hierarchical routing mechanism retrieves relevant c...
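The abstract describes a two-level retrieval scheme: route a student's interaction history to a latent cognitive schema, then retrieve a pre-analyzed error paradigm (with its CoT rationale) from that schema's bank. A minimal sketch of such hierarchical routing over an embedding-based memory bank is below; all names, embeddings, and the bank structure are illustrative assumptions, not details from the paper.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical offline-built memory bank (toy 3-d embeddings):
# each latent cognitive schema holds a centroid plus a paradigm bank of
# (embedding, CoT rationale) pairs analyzed ahead of time.
memory_bank = {
    "procedural_slip": {
        "centroid": [1.0, 0.1, 0.0],
        "paradigms": [
            ([0.9, 0.2, 0.0], "Knows the concept but mis-executes steps."),
            ([1.0, 0.0, 0.1], "Arithmetic slip under time pressure."),
        ],
    },
    "conceptual_gap": {
        "centroid": [0.0, 1.0, 0.2],
        "paradigms": [
            ([0.1, 0.9, 0.3], "Missing prerequisite; errors are systematic."),
        ],
    },
}

def retrieve(query_emb, bank):
    """Hierarchical routing: nearest schema first, then best paradigm inside it."""
    schema = max(bank, key=lambda s: cosine(query_emb, bank[s]["centroid"]))
    _, rationale = max(bank[schema]["paradigms"],
                       key=lambda p: cosine(query_emb, p[0]))
    return schema, rationale
```

The retrieved rationale would then be placed in the frozen LLM's prompt, which is what makes the approach training-free: the memory bank, not the model weights, adapts to new data.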