[2602.20055] To Move or Not to Move: Constraint-based Planning Enables Zero-Shot Generalization for Interactive Navigation
Summary
This paper presents a novel constraint-based planning framework for mobile robots, enabling zero-shot generalization in interactive navigation by allowing robots to manipulate their environment to complete tasks.
Why It Matters
As robots increasingly operate in cluttered spaces such as homes and warehouses, this research addresses a critical challenge in robotics: completing navigation tasks when every route is blocked by movable obstacles. The proposed framework enhances robot autonomy and efficiency, making it relevant to advances in robotics and AI applications.
Key Takeaways
- Introduces Lifelong Interactive Navigation for mobile robots.
- Proposes an LLM-driven, constraint-based planning framework.
- Demonstrates improved task completion in cluttered environments.
- Combines reasoning and active perception for efficient navigation.
- Outperforms traditional and learning-based navigation methods.
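To make the core "to move or not to move" decision concrete, here is a minimal, purely illustrative sketch (not the paper's actual method): given the obstacles blocking a path, a planner clears the route only if every blocker is movable and relocating it violates no task constraint. All names (`Obstacle`, `plan_route`, the `fragile` constraint) are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    name: str
    movable: bool

def plan_route(path_obstacles, constraints):
    """Return 'move <name>' actions clearing the path, or None if the
    path cannot be cleared (immovable blocker or violated constraint)."""
    actions = []
    for obs in path_obstacles:
        if not obs.movable:
            return None  # immovable blocker: path infeasible
        if any(violates(obs) for violates in constraints):
            return None  # relocating it would break a task constraint
        actions.append(f"move {obs.name}")
    return actions

# Hypothetical constraint: never relocate fragile items
fragile = {"vase"}
constraints = [lambda o: o.name in fragile]

blocked = [Obstacle("chair", movable=True), Obstacle("box", movable=True)]
print(plan_route(blocked, constraints))  # ['move chair', 'move box']

print(plan_route([Obstacle("vase", movable=True)], constraints))  # None
```

In the paper's lifelong setting, such constraints would instead be generated and reasoned over by an LLM against a structured scene representation; this toy version only shows the constraint-gated decision structure.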
arXiv Details
Computer Science > Robotics, arXiv:2602.20055 (cs). Submitted on 23 Feb 2026.
Authors: Apoorva Vashisth (1), Manav Kulshrestha (1), Pranav Bakshi (2), Damon Conover (3), Guillaume Sartoretti (4), Aniket Bera (1) ((1) Purdue University, (2) IIT Kharagpur, (3) DEVCOM Army Research Lab, (4) National University of Singapore)
Abstract: Visual navigation typically assumes the existence of at least one obstacle-free path between start and goal, which must be discovered/planned by the robot. However, in real-world scenarios, such as home environments and warehouses, clutter can block all routes. Targeted at such cases, we introduce the Lifelong Interactive Navigation problem, where a mobile robot with manipulation abilities can move clutter to forge its own path to complete sequential object-placement tasks, each involving placing a given object (e.g., alarm clock, pillow) onto a target object (e.g., dining table, desk, bed). To address this lifelong setting, where environment changes accumulate and have long-term effects, we propose an LLM-driven, constraint-based planning framework with active perception. Our framework allows the LLM to reason over a structured scene...