[2505.06182] Apple: Toward General Active Perception via Reinforcement Learning
Computer Science > Robotics
arXiv:2505.06182 (cs)
[Submitted on 9 May 2025 (v1), last revised 27 Feb 2026 (this version, v4)]

Title: Apple: Toward General Active Perception via Reinforcement Learning
Authors: Tim Schneider, Cristiana de Farias, Roberto Calandra, Liming Chen, Jan Peters

Abstract: Active perception is a fundamental skill that enables humans to deal with the uncertainty inherent to our partially observable environment. For senses such as touch, where information is sparse and local, active perception becomes crucial. In recent years, active perception has emerged as an important research domain in robotics. However, current methods are often bound to specific tasks or make strong assumptions that limit their generality. To address this gap, this work introduces APPLE (Active Perception Policy Learning), a novel framework that leverages reinforcement learning (RL) to address a range of different active perception problems. APPLE jointly trains a transformer-based perception module and a decision-making policy with a unified optimization objective, learning how to actively gather information. By design, APPLE is not limited to a specific task and can, in principle, be applied to a wide range of active perception problems. We evaluate two variants of APPLE across different tasks, including tactile...
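The abstract's core idea, jointly optimizing a transformer-based perception module and an action policy under a single objective, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: all module sizes, the environment, and the information-gain reward are assumptions, and a plain REINFORCE loss stands in for whatever RL algorithm APPLE actually uses.

```python
# Hypothetical sketch: a transformer perception module and a policy head
# trained jointly with one policy-gradient objective (REINFORCE here).
# Dimensions, rewards, and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

class ActivePerceptionAgent(nn.Module):
    def __init__(self, obs_dim=8, d_model=32, n_actions=4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)           # tokenize raw observations
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.perception = nn.TransformerEncoder(layer, num_layers=1)
        self.policy_head = nn.Linear(d_model, n_actions)   # logits over sensing actions

    def forward(self, obs_seq):
        # obs_seq: (batch, seq_len, obs_dim) -- history of partial observations
        h = self.perception(self.embed(obs_seq))
        return self.policy_head(h[:, -1])                  # act based on latest state

torch.manual_seed(0)
agent = ActivePerceptionAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

obs = torch.randn(2, 5, 8)                 # dummy batch of observation histories
logits = agent(obs)
dist = torch.distributions.Categorical(logits=logits)
actions = dist.sample()
rewards = torch.ones(2)                    # placeholder information-gain reward
loss = -(dist.log_prob(actions) * rewards).mean()  # unified RL objective:
opt.zero_grad()                                    # gradients flow through both the
loss.backward()                                    # policy head and the transformer
opt.step()                                         # perception module at once
```

Because the policy loss backpropagates through the transformer encoder, perception and decision-making are shaped by the same reward signal, which is the "unified optimization objective" the abstract refers to.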