[2602.14099] SemanticFeels: Semantic Labeling during In-Hand Manipulation
Summary
The paper presents SemanticFeels, a framework that extends NeuralFeels with semantic labeling during in-hand manipulation, enabling robots to classify object materials by fusing vision and touch.
Why It Matters
As robots become more integrated into daily tasks, their ability to accurately perceive and classify materials is crucial for adaptive behavior. This research contributes to advancements in robotics, particularly in improving human-robot interaction and manipulation tasks.
Key Takeaways
- SemanticFeels integrates semantic labeling with neural implicit shape representation.
- The framework utilizes high-resolution tactile readings for material classification.
- Achieved an average matching accuracy of 79.87% across multiple manipulation trials on a multi-material object.
- Demonstrates the importance of combining vision and touch in robotic applications.
- Potential applications include enhanced human-robot collaboration and adaptive manipulation.
Paper Details
Computer Science > Robotics — arXiv:2602.14099 (cs) [Submitted on 15 Feb 2026]
Title: SemanticFeels: Semantic Labeling during In-Hand Manipulation
Authors: Anas Al Shikh Khalil, Haozhi Qi, Roberto Calandra
Abstract: As robots become increasingly integrated into everyday tasks, their ability to perceive both the shape and properties of objects during in-hand manipulation becomes critical for adaptive and intelligent behavior. We present SemanticFeels, an extension of the NeuralFeels framework that integrates semantic labeling with neural implicit shape representation, from vision and touch. To illustrate its application, we focus on material classification: high-resolution Digit tactile readings are processed by a fine-tuned EfficientNet-B0 convolutional neural network (CNN) to generate local material predictions, which are then embedded into an augmented signed distance field (SDF) network that jointly predicts geometry and continuous material regions. Experimental results show that the system achieves a high correspondence between predicted and actual materials on both single- and multi-material objects, with an average matching accuracy of 79.87% across multiple manipulation trials on a multi-material object.
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI); Computer Vision and Pattern Recogn...
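The abstract describes an augmented signed distance field (SDF) network that jointly predicts geometry and continuous material regions at each 3D query point. The sketch below illustrates the general idea of such a joint head, assuming a small PyTorch MLP; the layer sizes, material count, and two-head layout are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AugmentedSDF(nn.Module):
    """Illustrative joint SDF + material field (not the paper's exact model):
    a shared MLP trunk maps a 3D point to features, from which one head
    predicts a signed distance and another predicts material logits."""

    def __init__(self, num_materials: int = 4, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sdf_head = nn.Linear(hidden, 1)                   # signed distance
        self.material_head = nn.Linear(hidden, num_materials)  # material logits

    def forward(self, points: torch.Tensor):
        feats = self.trunk(points)
        return self.sdf_head(feats), self.material_head(feats)

net = AugmentedSDF()
pts = torch.rand(8, 3)  # batch of 3D query points in the object frame
sdf, mat_logits = net(pts)
print(tuple(sdf.shape), tuple(mat_logits.shape))
```

Because both heads share one trunk, geometry and material predictions remain spatially consistent: querying the field densely yields a surface (the SDF zero level set) with a continuous material label distribution over it.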