[2602.12508] Monocular Reconstruction of Neural Tactile Fields
Summary
This paper introduces neural tactile fields, a 3D representation that maps spatial locations to the expected tactile response upon contact, together with a method for predicting these fields from a single monocular RGB image, enabling interaction-aware robotic navigation.
Why It Matters
The research addresses a critical challenge in robotics: enabling robots to navigate and interact with environments that deform, yield, and reconfigure under contact, where static occupancy maps fall short. By introducing neural tactile fields, this work enhances robotic perception and path planning, potentially leading to more effective and adaptable robotic systems in real-world applications.
Key Takeaways
- Neural tactile fields provide a new 3D representation for robots.
- The method predicts tactile responses from a single monocular RGB image.
- Empirical results show an 85.8% improvement in volumetric 3D reconstruction and a 26.7% improvement in surface reconstruction over state-of-the-art monocular methods.
- Integrating tactile fields with path planners allows for smarter navigation.
- This approach could enhance robotic interaction in complex environments.
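To make the core idea concrete, here is a minimal sketch of what a tactile field looks like as a function: a small network that maps a 3D query location to a scalar "expected tactile resistance". The layer sizes, the scalar output, and the (0, 1) resistance scale are illustrative assumptions, not the paper's actual architecture (which is trained from monocular RGB input).

```python
import numpy as np

class TactileFieldMLP:
    """Illustrative stand-in for a neural tactile field: a tiny MLP mapping
    a 3D location to a scalar 'expected tactile resistance' in (0, 1),
    where 0 ~ free space and 1 ~ rigid. Sizes/output are assumptions."""

    def __init__(self, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.standard_normal((3, hidden)) * 0.5
        self.b1 = np.zeros(hidden)
        self.W2 = rng.standard_normal((hidden, 1)) * 0.5
        self.b2 = np.zeros(1)

    def __call__(self, xyz):
        xyz = np.atleast_2d(xyz)              # (N, 3) query points
        h = np.tanh(xyz @ self.W1 + self.b1)  # hidden features
        r = h @ self.W2 + self.b2             # raw response, shape (N, 1)
        return (1.0 / (1.0 + np.exp(-r))).ravel()  # squash to (0, 1)

# Querying the field at arbitrary 3D points:
field = TactileFieldMLP()
resistance = field([[0.1, 0.2, 0.3]])  # one scalar per query point
```

The key property this sketch shows is that the representation is a continuous function over space, queryable at any point, rather than a fixed occupancy grid.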
Computer Science > Robotics
arXiv:2602.12508 (cs) [Submitted on 13 Feb 2026]
Title: Monocular Reconstruction of Neural Tactile Fields
Authors: Pavan Mantripragada, Siddhanth Deshmukh, Eadom Dessalene, Manas Desai, Yiannis Aloimonos
Abstract: Robots operating in the real world must plan through environments that deform, yield, and reconfigure under contact, requiring interaction-aware 3D representations that extend beyond static geometric occupancy. To address this, we introduce neural tactile fields, a novel 3D representation that maps spatial locations to the expected tactile response upon contact. Our model predicts these neural tactile fields from a single monocular RGB image -- the first method to do so. When integrated with off-the-shelf path planners, neural tactile fields enable robots to generate paths that avoid high-resistance objects while deliberately routing through low-resistance regions (e.g. foliage), rather than treating all occupied space as equally impassable. Empirically, our learning framework improves volumetric 3D reconstruction by 85.8% and surface reconstruction by 26.7% compared to state-of-the-art monocular 3D reconstruction methods (LRM and Direct3D).
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2602.12508
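The abstract's planning claim, routing through low-resistance regions instead of treating all occupied space as impassable, can be sketched with a standard Dijkstra search over a resistance grid. The cost weighting, the hard-obstacle threshold, and the grid values below are assumptions for illustration; the paper uses off-the-shelf planners with its learned 3D fields.

```python
import heapq
import numpy as np

def plan_path(resistance, start, goal, hard_threshold=0.9):
    """Dijkstra over a 2D grid where edge cost grows with predicted tactile
    resistance, so soft regions (e.g. foliage) are traversable but costly.
    Cells at or above `hard_threshold` (assumed cutoff) are impassable."""
    H, W = resistance.shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        r, c = u
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            v = (r + dr, c + dc)
            if not (0 <= v[0] < H and 0 <= v[1] < W):
                continue
            if resistance[v] >= hard_threshold:
                continue  # rigid obstacle: never enter
            nd = d + 1.0 + 10.0 * resistance[v]  # soft cells cost more, not infinity
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# A wall of rigid cells (0.95) with one low-resistance "foliage" gap (0.5):
grid = np.zeros((5, 5))
grid[:, 2] = 0.95
grid[2, 2] = 0.5
route = plan_path(grid, (2, 0), (2, 4))  # routes through the foliage cell
```

A binary occupancy planner would mark the entire wall as blocked and fail here; the resistance-weighted cost lets the planner deliberately pass through the yielding cell.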