[2603.01751] Shape-Interpretable Visual Self-Modeling Enables Geometry-Aware Continuum Robot Control


arXiv - Machine Learning


Computer Science > Robotics
arXiv:2603.01751 (cs) [Submitted on 2 Mar 2026]

Title: Shape-Interpretable Visual Self-Modeling Enables Geometry-Aware Continuum Robot Control
Authors: Peng Yu, Xin Wang, Ning Tan

Abstract: Continuum robots possess high flexibility and redundancy, making them well suited for safe interaction in complex environments, yet their continuous deformation and nonlinear dynamics pose fundamental challenges to perception, modeling, and control. Existing vision-based control approaches often rely on end-to-end learning, achieving shape regulation without explicit awareness of robot geometry or its interaction with the environment. Here, we introduce a shape-interpretable visual self-modeling framework for continuum robots that enables geometry-aware control. Robot shapes are encoded from multi-view planar images using a Bezier-curve representation, transforming visual observations into a compact and physically meaningful shape space that uniquely characterizes the robot's three-dimensional configuration. Based on this representation, neural ordinary differential equations are employed to self-model both shape and end-effector dynamics directly from data, enabling hybrid shape-position control without analytical models or dense body markers. The explicit geometric structure of the l...
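The core idea in the abstract, describing the robot's continuously deforming body with a Bezier curve so that a handful of control points compactly encode its 3D shape, can be sketched as below. This is an illustrative evaluation of a cubic Bezier curve in Bernstein form, not the paper's implementation; the control points are hypothetical and merely mimic a single bending continuum segment.

```python
from math import comb

def bezier_point(control_points, t):
    """Evaluate a 3D Bezier curve in Bernstein form at parameter t in [0, 1].

    control_points: list of (x, y, z) tuples; the curve interpolates the
    first point at t=0 and the last point at t=1.
    """
    n = len(control_points) - 1
    # Bernstein basis weights B_{i,n}(t) = C(n, i) * (1-t)^(n-i) * t^i
    b = [comb(n, i) * (1 - t) ** (n - i) * t ** i for i in range(n + 1)]
    return tuple(
        sum(w * p[k] for w, p in zip(b, control_points)) for k in range(3)
    )

# Hypothetical control points for one bending segment (base at origin).
ctrl = [(0, 0, 0), (0, 0, 0.4), (0.2, 0, 0.7), (0.5, 0, 0.9)]
base = bezier_point(ctrl, 0.0)  # curve starts at the first control point
tip = bezier_point(ctrl, 1.0)   # curve ends at the last control point
```

With four control points per segment, the whole backbone is summarized by a dozen numbers instead of dense body markers, which is what makes such a shape space compact enough to drive a learned dynamics model.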

Originally published on March 03, 2026. Curated by AI News.

Related Articles

[2603.23899] SM-Net: Learning a Continuous Spectral Manifold from Multiple Stellar Libraries · Machine Learning

[2603.16629] MLLM-based Textual Explanations for Face Comparison · LLMs

[2603.15159] To See is Not to Master: Teaching LLMs to Use Private Libraries for Code Generation · LLMs

[2603.14375] The Pulse of Motion: Measuring Physical Frame Rate from Visual Dynamics · Machine Learning
