[2603.01104] Egocentric Co-Pilot: Web-Native Smart-Glasses Agents for Assistive Egocentric AI
Computer Science > Human-Computer Interaction

arXiv:2603.01104 (cs)

[Submitted on 1 Mar 2026]

Title: Egocentric Co-Pilot: Web-Native Smart-Glasses Agents for Assistive Egocentric AI

Authors: Sicheng Yang, Yukai Huang, Weitong Cai, Shitong Sun, Fengyi Fang, You He, Yiqiao Xie, Jiankang Deng, Hang Zhang, Jifei Song, Zhensong Zhang

Abstract: What if accessing the web did not require a screen, a stable desk, or even free hands? For people navigating crowded cities, living with low vision, or experiencing cognitive overload, smart glasses coupled with AI agents could turn the web into an always-on assistive layer over daily life. We present Egocentric Co-Pilot, a web-native neuro-symbolic framework that runs on smart glasses and uses a Large Language Model (LLM) to orchestrate a toolbox of perception, reasoning, and web tools. An egocentric reasoning core combines Temporal Chain-of-Thought with Hierarchical Context Compression to support long-horizon question answering and decision support over continuous first-person video, far beyond a single model's context window. Additionally, a lightweight multimodal intent layer maps noisy speech and gaze into structured commands. We further implement and evaluate a cloud-native WebRTC pipeline integrating streaming speech, video, and control messages into a unified...
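The abstract states only that an LLM orchestrates a toolbox of perception, reasoning, and web tools; the paper's actual call protocol is not given. A minimal sketch of such an orchestration loop follows, assuming a JSON tool-call convention and stub tools (TOOLS, run_agent, the "final" sentinel, and the message roles are hypothetical illustrations):

```python
import json

# Hypothetical tool registry: perception and web tools behind a uniform interface.
TOOLS = {
    "detect_objects": lambda frame_id: ["door", "exit sign"],   # perception stub
    "web_search":     lambda query: f"results for {query!r}",   # web tool stub
}

def run_agent(llm, user_goal: str, max_steps: int = 5):
    """Let the LLM pick tools until it emits a final answer or the step budget ends."""
    history = [{"role": "user", "content": user_goal}]
    for _ in range(max_steps):
        reply = llm(history)          # assumed to emit JSON: {"tool": ..., "args": ...}
        history.append({"role": "assistant", "content": reply})
        call = json.loads(reply)
        if call.get("tool") == "final":
            return call["answer"]
        result = TOOLS[call["tool"]](**call["args"])
        # Feed the tool result back so the next step can condition on it.
        history.append({"role": "tool", "content": json.dumps({"result": result})})
    return None
```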
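Hierarchical Context Compression is described only at a high level. One plausible reading, sketched below, keeps fine-grained captions for recent footage and folds older spans into progressively coarser summaries, so the timeline handed to the model stays bounded; the level spans, llm_summarize, and the caption interface are assumptions, not the paper's design:

```python
from dataclasses import dataclass, field

@dataclass
class TimelineLevel:
    span_seconds: float                           # temporal granularity of this level
    entries: list = field(default_factory=list)   # (t_start, t_end, summary) tuples

class HierarchicalContext:
    """Coarse summaries for old footage, fine detail for recent footage."""

    def __init__(self, llm_summarize, levels=(10.0, 60.0, 600.0)):
        self.llm_summarize = llm_summarize
        self.levels = [TimelineLevel(s) for s in levels]

    def add_caption(self, t: float, caption: str):
        # The finest level stores per-frame captions directly.
        self.levels[0].entries.append((t, t, caption))
        self._compress()

    def _compress(self):
        # When a level covers the next level's span, fold it into one summary.
        for lower, upper in zip(self.levels, self.levels[1:]):
            if not lower.entries:
                continue
            t0, t1 = lower.entries[0][0], lower.entries[-1][1]
            if t1 - t0 >= upper.span_seconds:
                text = " ".join(e[2] for e in lower.entries)
                upper.entries.append((t0, t1, self.llm_summarize(text)))
                lower.entries.clear()

    def prompt_context(self) -> str:
        # Coarse-to-fine timeline: old events compressed, recent ones verbatim.
        lines = []
        for level in reversed(self.levels):
            for t0, t1, s in level.entries:
                lines.append(f"[{t0:.0f}-{t1:.0f}s] {s}")
        return "\n".join(lines)
```

Temporal Chain-of-Thought would then amount to prompting the LLM to reason step by step over this timestamped timeline rather than over raw frames.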
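The multimodal intent layer maps noisy speech and gaze into structured commands. A minimal sketch of one way to do that grounding is below, assuming a Command schema and a simple deictic-resolution heuristic (both are illustrative assumptions; the paper does not specify the mechanism):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    action: str                # e.g. "search" or "read_aloud" (assumed action set)
    target: Optional[str]      # the object or query the user means
    confidence: float

DEICTIC = {"this", "that", "it", "here", "there"}

def resolve_intent(transcript: str, gaze_label: Optional[str]) -> Command:
    """Ground deictic speech ("what is this?") in the gazed-at object."""
    tokens = set(transcript.lower().split())
    action = "search" if tokens & {"what", "who", "find"} else "read_aloud"
    if DEICTIC & tokens and gaze_label:
        # Deictic utterance plus a gaze hit: substitute the gazed-at object.
        return Command(action=action, target=gaze_label, confidence=0.9)
    # Otherwise fall back to the literal transcript as the query.
    return Command(action=action, target=transcript, confidence=0.5)

print(resolve_intent("what is this", gaze_label="bus timetable"))
# Command(action='search', target='bus timetable', confidence=0.9)
```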
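Finally, the cloud-native WebRTC pipeline multiplexes streaming speech, video, and control messages over one connection. A minimal sketch of the cloud-side receiving end using the aiortc library is shown below; signaling (how the offer/answer SDP is exchanged) is application-specific and omitted, and handle_offer is a hypothetical entry point rather than the paper's implementation:

```python
import asyncio
from aiortc import RTCPeerConnection, RTCSessionDescription
from aiortc.mediastreams import MediaStreamError

async def handle_offer(offer_sdp: str) -> str:
    pc = RTCPeerConnection()

    @pc.on("datachannel")
    def on_datachannel(channel):
        # Structured control messages (e.g. JSON intents) ride the data channel.
        @channel.on("message")
        def on_message(message):
            print("control:", message)

    @pc.on("track")
    def on_track(track):
        # Audio frames would feed streaming ASR; video frames, perception tools.
        async def consume():
            try:
                while True:
                    frame = await track.recv()
                    # ... hand `frame` to ASR / vision models here ...
            except MediaStreamError:
                pass  # track ended
        asyncio.ensure_future(consume())

    await pc.setRemoteDescription(RTCSessionDescription(sdp=offer_sdp, type="offer"))
    answer = await pc.createAnswer()
    await pc.setLocalDescription(answer)
    return pc.localDescription.sdp
```

Running all three streams over a single peer connection keeps the glasses-to-cloud transport to one NAT traversal and one congestion-control context, which is one plausible motivation for the unified pipeline the abstract describes.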