[2506.17896] EgoWorld: Translating Exocentric View to Egocentric View using Rich Exocentric Observations
Computer Science > Computer Vision and Pattern Recognition
arXiv:2506.17896 (cs)
[Submitted on 22 Jun 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: EgoWorld: Translating Exocentric View to Egocentric View using Rich Exocentric Observations
Authors: Junho Park, Andrew Sangwoo Ye, Taein Kwon

Abstract: Egocentric vision is essential for both human and machine visual understanding, particularly in capturing the detailed hand-object interactions needed for manipulation tasks. Translating third-person views into first-person views significantly benefits augmented reality (AR), virtual reality (VR), and robotics applications. However, current exocentric-to-egocentric translation methods are limited by their dependence on 2D cues, synchronized multi-view settings, and unrealistic assumptions such as the necessity of an initial egocentric frame and relative camera poses during inference. To overcome these challenges, we introduce EgoWorld, a novel framework that reconstructs an egocentric view from rich exocentric observations, including point clouds, 3D hand poses, and textual descriptions. Our approach reconstructs a point cloud from estimated exocentric depth maps, reprojects it into the egocentric perspective, and then applies a diffusion model to produce dense, semantically coherent ...
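The back-project-and-reproject step the abstract describes (depth map → point cloud → egocentric pixels) is standard pinhole-camera geometry. The sketch below is a minimal NumPy illustration of that generic primitive only, not EgoWorld's implementation: the intrinsics `K` and `K_ego` and the relative pose `T_ego_exo` are hypothetical inputs, and it assumes the egocentric camera pose has already been estimated (e.g., from the 3D hand poses the abstract mentions), since the paper explicitly avoids requiring known relative poses at inference.

```python
import numpy as np

def backproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Lift an (H, W) depth map to an (N, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))       # pixel grids
    z = depth.reshape(-1)
    valid = z > 0                                        # keep pixels with a depth estimate
    uv1 = np.stack([u.reshape(-1), v.reshape(-1), np.ones(h * w)], axis=0)
    rays = np.linalg.inv(K) @ uv1                        # unit-depth rays through each pixel
    return (rays * z).T[valid]                           # scale rays by depth -> 3D points

def reproject(points: np.ndarray, T_ego_exo: np.ndarray, K_ego: np.ndarray) -> np.ndarray:
    """Transform exocentric points into the egocentric frame and project to pixels."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_ego = (T_ego_exo @ pts_h.T)[:3]                  # 4x4 rigid transform, drop homogeneous row
    in_front = pts_ego[2] > 0                            # discard points behind the camera
    proj = K_ego @ pts_ego[:, in_front]
    return (proj[:2] / proj[2]).T                        # (u, v) pixel coordinates
```

The sparse, hole-ridden egocentric projection this kind of step produces is exactly the incomplete map that, per the abstract, the diffusion model then densifies into a semantically coherent egocentric image.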