[2603.01751] Shape-Interpretable Visual Self-Modeling Enables Geometry-Aware Continuum Robot Control
Computer Science > Robotics
arXiv:2603.01751 (cs)
[Submitted on 2 Mar 2026]

Title: Shape-Interpretable Visual Self-Modeling Enables Geometry-Aware Continuum Robot Control
Authors: Peng Yu, Xin Wang, Ning Tan

Abstract: Continuum robots possess high flexibility and redundancy, making them well suited for safe interaction in complex environments, yet their continuous deformation and nonlinear dynamics pose fundamental challenges to perception, modeling, and control. Existing vision-based control approaches often rely on end-to-end learning, achieving shape regulation without explicit awareness of robot geometry or its interaction with the environment. Here, we introduce a shape-interpretable visual self-modeling framework for continuum robots that enables geometry-aware control. Robot shapes are encoded from multi-view planar images using a Bezier-curve representation, transforming visual observations into a compact and physically meaningful shape space that uniquely characterizes the robot's three-dimensional configuration. Based on this representation, neural ordinary differential equations are employed to self-model both shape and end-effector dynamics directly from data, enabling hybrid shape-position control without analytical models or dense body markers. The explicit geometric structure of the l...
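The abstract names a Bezier-curve shape space but does not give the curve degree or the fitting procedure. Below is a minimal sketch, assuming a cubic 3D Bezier backbone, of how a few control points compactly parameterize a continuum robot's configuration; the function name and control-point values are illustrative, not the paper's implementation.

```python
import numpy as np
from math import comb

def bezier_curve(control_points: np.ndarray, num_samples: int = 100) -> np.ndarray:
    """Evaluate a 3D Bezier curve from its control points.

    control_points: (n+1, 3) array; the curve degree is n.
    Returns (num_samples, 3) points sampled along the backbone.
    """
    n = control_points.shape[0] - 1
    t = np.linspace(0.0, 1.0, num_samples)  # curve parameter in [0, 1]
    # Bernstein basis: B_{i,n}(t) = C(n, i) * t^i * (1 - t)^(n - i)
    basis = np.stack(
        [comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)],
        axis=1,
    )                                        # shape (num_samples, n+1)
    return basis @ control_points            # shape (num_samples, 3)

# Hypothetical cubic backbone for a bending continuum robot (units: meters).
ctrl = np.array([
    [0.00, 0.0, 0.00],
    [0.00, 0.0, 0.10],
    [0.05, 0.0, 0.18],
    [0.12, 0.0, 0.22],
])
backbone = bezier_curve(ctrl)
tip = backbone[-1]  # end-effector position implied by the shape
```

Under this representation, the twelve control-point coordinates are the compact, physically interpretable shape state that the vision pipeline would regress from multi-view images.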
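The neural-ODE self-model is likewise described only at a high level. The sketch below illustrates the idea, assuming the state stacks the flattened Bezier control points with the end-effector position and that the actuation input is held constant over a rollout; `ShapeDynamics`, `rollout`, the hidden width, and the fixed-step RK4 integrator are all assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ShapeDynamics(nn.Module):
    """Neural ODE vector field: d(state)/dt = f(state, action)."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # Time derivative of the shape-plus-tip state under the given actuation.
        return self.net(torch.cat([state, action], dim=-1))

def rollout(f: ShapeDynamics, state: torch.Tensor, action: torch.Tensor,
            dt: float = 0.01, steps: int = 50) -> torch.Tensor:
    """Integrate the learned dynamics with fixed-step RK4."""
    traj = [state]
    for _ in range(steps):
        k1 = f(state, action)
        k2 = f(state + 0.5 * dt * k1, action)
        k3 = f(state + 0.5 * dt * k2, action)
        k4 = f(state + dt * k3, action)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(state)
    return torch.stack(traj)  # shape (steps + 1, state_dim)

# Hypothetical dimensions: 4 control points * 3 coords + 3D tip = 15 states,
# 3 tendon actuators.
f = ShapeDynamics(state_dim=15, action_dim=3)
traj = rollout(f, torch.zeros(15), torch.zeros(3))
```

Training such a model on observed shape trajectories would yield the data-driven dynamics the abstract describes, with predicted rollouts usable for hybrid shape-position control without an analytical model.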