[2604.04482] Scalable and Explainable Learner-Video Interaction Prediction using Multimodal Large Language Models

arXiv - AI 3 min read

About this article

Computer Science > Artificial Intelligence
arXiv:2604.04482 (cs) · Submitted on 6 Apr 2026

Title: Scalable and Explainable Learner-Video Interaction Prediction using Multimodal Large Language Models
Authors: Dominik Glandorf, Fares Fawzi, Tanja Käser

Abstract: Learners' use of video controls in educational videos provides implicit signals of cognitive processing and instructional design quality, yet the lack of scalable and explainable predictive models limits instructors' ability to anticipate such behavior before deployment. We propose a scalable, interpretable pipeline for predicting population-level watching, pausing, skipping, and rewinding behavior as proxies for cognitive load from video content alone. Our approach leverages multimodal large language models (MLLMs) to compute embeddings of short video segments and trains a neural classifier to identify temporally fine-grained interaction peaks. Drawing from multimedia learning theory on instructional design for optimal cognitive load, we code features of the video segments using GPT-5 and employ them as a basis for interpreting model predictions via concept activation vectors. We evaluate our pipeline on 77 million video control events from 66 online courses. Our findings demonstrate that classifiers based on MLLM embedd...
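The pipeline the abstract describes has three stages: embed short video segments with an MLLM, train a classifier to flag interaction peaks, and interpret its predictions with concept activation vectors (CAVs). A minimal sketch of the last two stages is below, with synthetic data standing in for the paper's actual MLLM embeddings and GPT-5-coded concept labels; the linear probe, dimensions, and labels here are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for MLLM embeddings: 200 video segments x 512 dims,
# with synthetic binary labels marking interaction peaks (e.g. pause spikes).
X = rng.normal(size=(200, 512))
w_true = rng.normal(size=512)
y = (X @ w_true > 0).astype(float)

# Linear logistic probe as a stand-in for the paper's neural classifier,
# trained by plain gradient descent on the cross-entropy loss.
w = np.zeros(512)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted peak probability
    w -= 0.1 * X.T @ (p - y) / len(y)        # gradient step

# Concept activation vector: a direction in embedding space separating
# segments coded positive vs. negative for a design concept (synthetic here;
# the paper derives these codes from multimedia learning theory via GPT-5).
concept = X @ rng.normal(size=512) > 0
cav = X[concept].mean(axis=0) - X[~concept].mean(axis=0)
cav /= np.linalg.norm(cav)

# Directional sensitivity: for a linear probe the gradient of the peak
# logit w.r.t. the input is w, so a single dot product tells us whether
# moving a segment along the concept direction raises its peak score.
sensitivity = float(w @ cav)
```

A positive `sensitivity` would suggest the coded concept pushes segments toward predicted interaction peaks; with a nonlinear classifier this dot product becomes a per-segment gradient, aggregated TCAV-style across segments.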

Originally published on April 07, 2026. Curated by AI News.

Related Articles

Anthropic gave Claude $100 to go shopping, here’s what the AI ended up buying

Anthropic’s AI experiment showed Claude independently handled 186 deals worth over $4,000, but results varied by model capability, with u...

AI Tools & Products · 5 min ·
CoreWeave (CRWV) Partners with Anthropic to Provide Infrastructure for Claude AI Models

CoreWeave Inc. (NASDAQ:CRWV) is one of the best technology stocks to buy for the next decade. On April 20, CoreWeave announced a multi-ye...

AI Tools & Products · 2 min ·
[2604.01650] AromaGen: Interactive Generation of Rich Olfactory Experiences with Multimodal Language Models

Abstract page for arXiv paper 2604.01650: AromaGen: Interactive Generation of Rich Olfactory Experiences with Multimodal Language Models

arXiv - AI · 4 min ·
[2602.11931] AdaptEvolve: Improving Efficiency of Evolutionary AI Agents through Adaptive Model Selection

Abstract page for arXiv paper 2602.11931: AdaptEvolve: Improving Efficiency of Evolutionary AI Agents through Adaptive Model Selection

arXiv - AI · 3 min ·
