[2603.22529] Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.22529 (cs)
[Submitted on 23 Mar 2026]

Title: Ego2Web: A Web Agent Benchmark Grounded in Egocentric Videos
Authors: Shoubin Yu, Lei Shu, Antoine Yang, Yao Fu, Srinivas Sunkara, Maria Wang, Jindong Chen, Mohit Bansal, Boqing Gong

Abstract: Multimodal AI agents are increasingly automating complex real-world workflows that involve online web execution. However, current web-agent benchmarks suffer from a critical limitation: they focus entirely on web-based interaction and perception, lacking grounding in the user's real-world physical surroundings. This limitation prevents evaluation in crucial scenarios, such as when an agent must use egocentric visual perception (e.g., via AR glasses) to recognize an object in the user's surroundings and then complete a related task online. To address this gap, we introduce Ego2Web, the first benchmark designed to bridge egocentric video perception and web agent execution. Ego2Web pairs real-world first-person video recordings with web tasks that require visual understanding, web task planning, and interaction in an online environment for successful completion. We utilize an automatic data-generation pipeline combined with human verification and refinement to curate well-constructed, high-quality video-task pairs across diverse web task types...