[2511.18140] Observer-Actor: Active Vision Imitation Learning with Sparse-View Gaussian Splatting
Computer Science > Robotics

arXiv:2511.18140 (cs)
[Submitted on 22 Nov 2025 (v1), last revised 4 Mar 2026 (this version, v2)]

Title: Observer-Actor: Active Vision Imitation Learning with Sparse-View Gaussian Splatting
Authors: Yilong Wang, Cheng Qian, Ruomeng Fan, Edward Johns

Abstract: We propose Observer-Actor (ObAct), a novel framework for active vision imitation learning in which the observer moves to optimal visual observations for the actor. We study ObAct on a dual-arm robotic system equipped with wrist-mounted cameras. At test time, ObAct dynamically assigns observer and actor roles: the observer arm constructs a 3D Gaussian Splatting (3DGS) representation from three images, virtually explores this representation to find an optimal camera pose, and then moves to that pose; the actor arm then executes a policy using the observer's observations. This formulation enhances the clarity and visibility of both the object and the gripper in the policy's observations. As a result, we can train ambidextrous policies on observations that remain closer to the occlusion-free training distribution, leading to more robust policies. We study this formulation with two existing imitation learning methods -- trajectory transfer and behavior cloning -- and experiments show that ObAct significantly outperforms static...
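The virtual viewpoint search described in the abstract (explore the reconstructed scene, score candidate camera poses, move to the best one) can be sketched at a toy level. The code below is not the authors' method: it replaces the 3DGS renderer with point clouds and scores each candidate pose by the fraction of object points whose camera ray is not blocked by occluder points. All names (`sample_hemisphere_poses`, `visibility_score`, `best_viewpoint`) and the hemisphere/ray heuristics are illustrative assumptions.

```python
import numpy as np

def sample_hemisphere_poses(n=64, radius=0.5, seed=0):
    # Sample candidate camera positions on a hemisphere above the workspace
    # (stand-in for virtually exploring the 3DGS reconstruction).
    rng = np.random.default_rng(seed)
    az = rng.uniform(0.0, 2.0 * np.pi, n)          # azimuth
    el = rng.uniform(np.pi / 6, np.pi / 2, n)      # elevation, kept above 30 deg
    return np.stack([radius * np.cos(el) * np.cos(az),
                     radius * np.cos(el) * np.sin(az),
                     radius * np.sin(el)], axis=1)

def visibility_score(cam, object_pts, occluder_pts, tol=0.03):
    # An object point counts as visible if no occluder point lies within `tol`
    # of the camera-to-point ray and closer to the camera than the point.
    visible = 0
    for p in object_pts:
        ray = p - cam
        dist = np.linalg.norm(ray)
        ray = ray / dist
        blocked = False
        for q in occluder_pts:
            t = np.dot(q - cam, ray)               # projection along the ray
            if 0.0 < t < dist and np.linalg.norm(q - cam - t * ray) < tol:
                blocked = True
                break
        visible += not blocked
    return visible / len(object_pts)

def best_viewpoint(object_pts, occluder_pts, n=64):
    # Score every candidate pose and return the least-occluded one,
    # mimicking the observer arm's choice of an optimal camera pose.
    cams = sample_hemisphere_poses(n)
    scores = [visibility_score(c, object_pts, occluder_pts) for c in cams]
    i = int(np.argmax(scores))
    return cams[i], scores[i]
```

In the real system the score would come from rendering the sparse-view Gaussian Splatting model at each candidate pose rather than from a point-cloud heuristic, but the structure of the search (sample, score, argmax, then move the observer arm) is the same.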