[2603.22282] UniMotion: A Unified Framework for Motion-Text-Vision Understanding and Generation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.22282 (cs) [Submitted on 23 Mar 2026]

Title: UniMotion: A Unified Framework for Motion-Text-Vision Understanding and Generation
Authors: Ziyi Wang, Xinshun Wang, Shuang Chen, Yang Cong, Mengyuan Liu

Abstract: We present UniMotion, to our knowledge the first unified framework for simultaneous understanding and generation of human motion, natural language, and RGB images within a single architecture. Existing unified models handle only restricted modality subsets (e.g., Motion-Text or static Pose-Image) and predominantly rely on discrete tokenization, which introduces quantization errors and disrupts temporal continuity. UniMotion overcomes both limitations through a core principle: treating motion as a first-class continuous modality on equal footing with RGB. A novel Cross-Modal Aligned Motion VAE (CMA-VAE) and symmetric dual-path embedders construct parallel continuous pathways for Motion and RGB within a shared LLM backbone. To inject visual-semantic priors into motion representations without requiring images at inference, we propose Dual-Posterior KL Alignment (DPA), which distills a vision-fused encoder's richer posterior into the motion-only encoder. To address the cold-start problem -- where text supervision alone is too sparse...
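The abstract does not spell out the DPA objective, but a posterior-distillation KL between two diagonal Gaussian encoders is a natural reading: the vision-fused encoder acts as a frozen teacher whose posterior the motion-only encoder is pulled toward, so that images are only needed at training time. The sketch below is a minimal illustration under that assumption; the function names (`gaussian_kl`, `dpa_loss`), the stop-gradient on the teacher, and the reduction over dimensions are our hypothetical choices, not details confirmed by the paper.

```python
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians,
    summed over latent dimensions and averaged over the batch."""
    var_q = logvar_q.exp()
    var_p = logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q
                + (var_q + (mu_q - mu_p) ** 2) / var_p
                - 1.0)
    return kl.sum(dim=-1).mean()

def dpa_loss(mu_fused, logvar_fused, mu_motion, logvar_motion):
    """Hypothetical Dual-Posterior KL Alignment: distill the richer
    vision-fused posterior (teacher, detached) into the motion-only
    posterior (student), as one plausible reading of the abstract."""
    return gaussian_kl(mu_fused.detach(), logvar_fused.detach(),
                       mu_motion, logvar_motion)
```

In this reading, both encoders see the motion sequence during training while only the fused encoder also receives the RGB image; at inference the motion-only encoder runs alone, having absorbed the visual-semantic prior through the alignment term.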