[2604.03631] Single-agent vs. Multi-agents for Automated Video Analysis of On-Screen Collaborative Learning Behaviors

arXiv - AI · 4 min read

Computer Science > Artificial Intelligence
arXiv:2604.03631 (cs)
[Submitted on 4 Apr 2026]

Title: Single-agent vs. Multi-agents for Automated Video Analysis of On-Screen Collaborative Learning Behaviors
Authors: Likai Peng, Shihui Feng

Abstract: On-screen learning behavior provides valuable insights into how students seek, use, and create information during learning. Analyzing on-screen behavioral engagement is essential for capturing students' cognitive and collaborative processes. The recent development of Vision Language Models (VLMs) offers new opportunities to automate the labor-intensive manual coding often required for multimodal video data analysis. In this study, we compared the performance of leading closed-source VLMs (Claude-3.7-Sonnet, GPT-4.1) and an open-source VLM (Qwen2.5-VL-72B) in single- and multi-agent settings for automated coding of screen recordings in collaborative learning contexts based on the ICAP framework. In particular, we proposed and compared two multi-agent frameworks: 1) a three-agent workflow multi-agent system (MAS) that segments screen videos by scene and detects on-screen behaviors using cursor-informed VLM prompting with evidence-based verification; 2) an autonomous-decision MAS inspired by ReAct that iteratively interleaves reasoning, tool-l...
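The abstract only names the two MAS designs, so the following is a minimal Python sketch of how the three-agent workflow (scene segmentation, then cursor-informed behavior detection, then evidence-based verification) could be wired together. Everything in it is an illustrative assumption: the names (`Segment`, `query_vlm`, the three agent functions), the prompts, and the placeholder segmentation logic are not taken from the paper, and `query_vlm` merely stands in for a real multimodal API call to one of the evaluated models.

```python
from dataclasses import dataclass

# ICAP behavior labels (Interactive, Constructive, Active, Passive).
ICAP_LABELS = ["Interactive", "Constructive", "Active", "Passive"]

@dataclass
class Segment:
    start_s: float       # scene start time, seconds
    end_s: float         # scene end time, seconds
    frames: list         # sampled frames for this scene
    cursor_trace: list   # (x, y, t) cursor positions over the scene

def query_vlm(prompt: str, frames: list) -> str:
    """Stub for a VLM call (e.g., Claude-3.7-Sonnet, GPT-4.1, or
    Qwen2.5-VL-72B); a real system would call the model's multimodal API."""
    return "Active"  # placeholder response

# Agent 1: split the screen recording into scenes. A real segmenter would
# detect scene cuts (e.g., large inter-frame pixel differences).
def segment_agent(video_frames: list) -> list:
    return [Segment(0.0, 30.0, video_frames, cursor_trace=[])]

# Agent 2: label each scene, folding the cursor trace into the prompt
# ("cursor-informed VLM prompting").
def detect_agent(seg: Segment) -> str:
    prompt = (
        f"Classify the on-screen behavior in these frames using the ICAP "
        f"framework {ICAP_LABELS}. "
        f"Cursor trajectory (x, y, t): {seg.cursor_trace}"
    )
    return query_vlm(prompt, seg.frames)

# Agent 3: check the proposed label against concrete on-screen evidence
# ("evidence-based verification").
def verify_agent(seg: Segment, label: str) -> str:
    prompt = (
        f"The proposed ICAP label is '{label}'. Cite on-screen evidence "
        f"supporting or refuting it, then output the final label."
    )
    return query_vlm(prompt, seg.frames)

def workflow_mas(video_frames: list) -> list:
    """Fixed pipeline: segment, detect, verify; one pass per scene."""
    results = []
    for seg in segment_agent(video_frames):
        label = detect_agent(seg)
        results.append((seg, verify_agent(seg, label)))
    return results
```

The autonomous-decision variant replaces this fixed order with a ReAct-style loop in which a single controller interleaves reasoning and tool calls until it commits to a label. The abstract's description of that loop is cut off, so the tool set below (`sample_frames`, `read_cursor`) is purely hypothetical; the sketch reuses `query_vlm` and `ICAP_LABELS` from above.

```python
def react_mas(video_frames: list, max_steps: int = 5) -> str:
    # Hypothetical tools the controller may invoke between reasoning steps.
    tools = {
        "sample_frames": lambda: video_frames[:4],  # gather more visual evidence
        "read_cursor": lambda: [],                  # fetch the cursor trace
    }
    observation = "start"
    for _ in range(max_steps):
        # One VLM call produces the next decision; a real system would parse
        # a structured Thought/Action response rather than a bare string.
        decision = query_vlm(
            f"Observation: {observation}. Either call one of {list(tools)} "
            f"or answer with a final ICAP label from {ICAP_LABELS}.",
            video_frames,
        )
        if decision in ICAP_LABELS:   # the agent commits to a final label
            return decision
        if decision in tools:         # the agent requests a tool call
            observation = tools[decision]()
    return "Passive"  # conservative fallback if the step budget runs out
```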

Originally published on April 07, 2026. Curated by AI News.

