[2604.04088] Embedding Enhancement via Fine-Tuned Language Models for Learner-Item Cognitive Modeling
Computer Science > Computation and Language
arXiv:2604.04088 (cs)
[Submitted on 5 Apr 2026]

Title: Embedding Enhancement via Fine-Tuned Language Models for Learner-Item Cognitive Modeling
Authors: Yuanhao Liu, Zihan Zhou, Kaiying Wu, Shuo Liu, Yiyang Huang, Jiajun Guo, Aimin Zhou, Hong Qian

Abstract: Learner-item cognitive modeling plays a central role in web-based intelligent education systems by enabling cognitive diagnosis (CD) across diverse online educational scenarios. Although ID embedding remains the mainstream approach in cognitive modeling due to its effectiveness and flexibility, recent advances in language models (LMs) have introduced new possibilities for incorporating rich semantic representations to enhance CD performance. This highlights the need for a comprehensive analysis of how LMs enhance embeddings through semantic integration across mainstream CD tasks. This paper identifies two key challenges in fully leveraging LMs in existing work: (1) misalignment between the training objectives of LMs and CD models creates a distribution gap in feature spaces; (2) a unified framework is needed to integrate textual embeddings across varied CD tasks while preserving the strengths of existing cognitive modeling paradigms, so that embedding enhancement remains robust. To addr...
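The abstract's core idea of enhancing ID embeddings with LM-derived text features can be sketched minimally as follows. This is an illustrative assumption, not the paper's actual method: all dimensions, names (`W_proj`, `predict_correct`), and the linear-projection alignment step are hypothetical stand-ins for the distribution-gap bridging the abstract describes, and the prediction head is a generic IRT-style dot product rather than any specific CD model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper)
n_learners, n_items, d_id, d_text = 100, 50, 16, 32

# Trainable ID embeddings: the mainstream CD approach
learner_emb = rng.normal(size=(n_learners, d_id))
item_emb = rng.normal(size=(n_items, d_id))

# Frozen text embeddings from a fine-tuned LM (random stand-ins here)
item_text_emb = rng.normal(size=(n_items, d_text))

# Linear projection mapping the LM feature space into the ID embedding
# space -- one simple way to narrow the distribution gap the abstract
# attributes to mismatched training objectives
W_proj = rng.normal(size=(d_text, d_id)) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_correct(learner_id, item_id):
    """IRT-style probability that a learner answers an item correctly,
    using an item ID embedding enhanced with projected text features."""
    enhanced_item = item_emb[item_id] + item_text_emb[item_id] @ W_proj
    return sigmoid(learner_emb[learner_id] @ enhanced_item)

p = predict_correct(0, 0)
```

In a full CD pipeline, `learner_emb`, `item_emb`, and `W_proj` would be learned jointly on response data while the LM embeddings stay frozen, preserving the existing ID-embedding paradigm while injecting semantics.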