[2604.04482] Scalable and Explainable Learner-Video Interaction Prediction using Multimodal Large Language Models
Computer Science > Artificial Intelligence

arXiv:2604.04482 (cs) [Submitted on 6 Apr 2026]

Title: Scalable and Explainable Learner-Video Interaction Prediction using Multimodal Large Language Models

Authors: Dominik Glandorf, Fares Fawzi, Tanja Käser

Abstract: Learners' use of video controls in educational videos provides implicit signals of cognitive processing and instructional design quality, yet the lack of scalable and explainable predictive models limits instructors' ability to anticipate such behavior before deployment. We propose a scalable, interpretable pipeline for predicting population-level watching, pausing, skipping, and rewinding behavior as proxies for cognitive load from video content alone. Our approach leverages multimodal large language models (MLLMs) to compute embeddings of short video segments and trains a neural classifier to identify temporally fine-grained interaction peaks. Drawing from multimedia learning theory on instructional design for optimal cognitive load, we code features of the video segments using GPT-5 and employ them as a basis for interpreting model predictions via concept activation vectors. We evaluate our pipeline on 77 million video control events from 66 online courses. Our findings demonstrate that classifiers based on MLLM embeddings...
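To make the described pipeline concrete, the following is a minimal, hypothetical sketch: a small neural classifier maps per-segment embeddings to per-behavior interaction-peak probabilities, and a difference-of-means concept activation vector (CAV) probes the classifier's sensitivity to a coded instructional-design feature. All data, dimensions, architecture details, and the CAV recipe are assumptions for illustration, not the paper's actual implementation; in the paper, the embeddings would come from an MLLM and the concept codes from GPT-5.

import torch
import torch.nn as nn

torch.manual_seed(0)

EMB_DIM = 512          # assumed MLLM embedding size (placeholder)
N_SEGMENTS = 1000      # toy number of short video segments
BEHAVIORS = ["watch", "pause", "skip", "rewind"]

# Stand-in data: one embedding per segment, plus a binary
# "interaction peak" label per behavior.
X = torch.randn(N_SEGMENTS, EMB_DIM)
y = torch.randint(0, 2, (N_SEGMENTS, len(BEHAVIORS))).float()

# Small neural classifier: embeddings -> per-behavior peak logits.
model = nn.Sequential(
    nn.Linear(EMB_DIM, 128),
    nn.ReLU(),
    nn.Linear(128, len(BEHAVIORS)),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

# Concept activation vector: a direction in embedding space separating
# segments coded as exhibiting a design feature (e.g. "signaling") from
# those that are not. Here the codes are random placeholders, and the CAV
# is a simple difference of class means rather than a learned probe.
concept_mask = torch.randint(0, 2, (N_SEGMENTS,)).bool()
cav = X[concept_mask].mean(0) - X[~concept_mask].mean(0)
cav = cav / cav.norm()

# TCAV-style sensitivity: directional derivative of the "pause" logit
# along the CAV, averaged over segments.
X_req = X.clone().requires_grad_(True)
pause_logit = model(X_req)[:, BEHAVIORS.index("pause")].sum()
grads = torch.autograd.grad(pause_logit, X_req)[0]
sensitivity = (grads @ cav).mean()
print(f"mean directional sensitivity of 'pause' to concept: {sensitivity:.4f}")

A consistently positive (or negative) sensitivity would suggest the concept direction pushes the classifier toward (or away from) predicting a pause peak, which is the kind of explanation the abstract's concept-activation-vector analysis targets.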