[2602.01844] CloDS: Visual-Only Unsupervised Cloth Dynamics Learning in Unknown Conditions

Summary

The paper presents CloDS, an unsupervised learning framework for cloth dynamics using visual data, addressing limitations of existing methods that require known physical properties.

Why It Matters

CloDS advances the field of computer vision by enabling the simulation of cloth dynamics without prior knowledge of physical properties, which is crucial for applications in robotics, animation, and virtual reality. This approach enhances the generalization capabilities of models in unknown conditions, making it a significant step forward in dynamic system modeling.

Key Takeaways

  • CloDS introduces a novel unsupervised learning framework for cloth dynamics.
  • The method utilizes multi-view visual observations to learn dynamics without prior physical property knowledge.
  • A dual-position opacity modulation technique is employed to handle complex deformations.
  • The framework demonstrates strong generalization capabilities for unseen configurations.
  • Code and visualization results are made publicly available for further research.

Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.01844 (cs). Submitted on 2 Feb 2026 (v1), last revised 20 Feb 2026 (this version, v2).

Title: CloDS: Visual-Only Unsupervised Cloth Dynamics Learning in Unknown Conditions

Authors: Yuliang Zhan, Jian Li, Wenbing Huang, Yang Liu, Hao Sun

Abstract: Deep learning has demonstrated remarkable capabilities in simulating complex dynamic systems. However, existing methods require known physical properties as supervision or inputs, limiting their applicability under unknown conditions. To explore this challenge, we introduce Cloth Dynamics Grounding (CDG), a novel scenario for unsupervised learning of cloth dynamics from multi-view visual observations. We further propose Cloth Dynamics Splatting (CloDS), an unsupervised dynamic learning framework designed for CDG. CloDS adopts a three-stage pipeline that first performs video-to-geometry grounding and then trains a dynamics model on the grounded meshes. To cope with large non-linear deformations and severe self-occlusions during grounding, we introduce a dual-position opacity modulation that supports bidirectional mapping between 2D observations and 3D geometry via mesh-based Gaussian splatting in the video-to-geometry grounding stage. It jointly considers the absolute and relative po...
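The abstract says the dual-position opacity modulation "jointly considers the absolute and relative" positions of the splatted Gaussians. The paper's actual formulation is not given here, but the idea can be sketched as a toy gate that combines each Gaussian's absolute position with its displacement from a rest (canonical) position. Everything below (the linear combination, the sigmoid gate, the weight names `w_abs` and `w_rel`) is a hypothetical illustration, not the authors' method:

```python
import numpy as np

def dual_position_opacity(abs_pos, rest_pos, w_abs, w_rel, b):
    """Toy dual-position opacity gate (illustrative only).

    Each Gaussian's opacity is modulated by a function of both its
    absolute 3D position and its displacement relative to a rest
    position -- one reading of "jointly considers the absolute and
    relative" positions in the abstract. The linear weights and the
    sigmoid gate are assumptions, not taken from the paper.
    """
    rel = abs_pos - rest_pos                   # relative displacement
    logit = abs_pos @ w_abs + rel @ w_rel + b  # combine both position cues
    return 1.0 / (1.0 + np.exp(-logit))        # squash to opacity in (0, 1)

# Usage: 4 Gaussians in 3D with random positions and weights.
rng = np.random.default_rng(0)
abs_pos = rng.normal(size=(4, 3))
rest_pos = rng.normal(size=(4, 3))
alpha = dual_position_opacity(abs_pos, rest_pos,
                              w_abs=rng.normal(size=3),
                              w_rel=rng.normal(size=3),
                              b=0.0)
print(alpha.shape)  # (4,)
```

In a real mesh-based Gaussian-splatting pipeline such a gate would sit inside the rendering loss, letting gradients from 2D observations suppress or reveal Gaussians on occluded or heavily deformed cloth regions.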
