[2603.22316] ST-GDance++: A Scalable Spatial-Temporal Diffusion for Long-Duration Group Choreography
Computer Science > Machine Learning
arXiv:2603.22316 (cs)
[Submitted on 20 Mar 2026]

Title: ST-GDance++: A Scalable Spatial-Temporal Diffusion for Long-Duration Group Choreography

Authors: Jing Xu, Weiqiang Wang, Cunjian Chen, Jun Liu, Qiuhong Ke

Abstract: Group dance generation from music requires synchronizing multiple dancers while maintaining spatial coordination, making it highly relevant to applications such as film production, gaming, and animation. Recent group dance generation models achieve promising quality, but they remain difficult to deploy in interactive scenarios because of their bidirectional attention dependencies. As the number of dancers and the sequence length grow, the attention computation required to align music conditions with motion sequences grows quadratically, reducing efficiency and increasing the risk of motion collisions. Effectively modeling dense spatial-temporal interactions is therefore essential, yet existing methods often struggle to capture such complexity, resulting in limited scalability and unstable multi-dancer coordination. To address these challenges, we propose ST-GDance++, a scalable framework that decouples spatial and temporal dependencies to enable efficient and collision-aware group choreography generation. For spatial modeling, ...
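To make the scaling argument concrete: with N dancers and T frames, joint attention over all N·T motion tokens costs O((N·T)²), whereas attending over dancers and frames separately costs O(N·T·(N+T)). The following is a minimal PyTorch sketch of such a decoupled spatial-temporal attention block, not the authors' implementation; the module name DecoupledSTAttention, the tensor layout, and the residual wiring are assumptions for illustration.

```python
# A minimal sketch (not the authors' implementation) of decoupled
# spatial-temporal attention over group-dance motion features.
# The module name and shapes below are illustrative assumptions.
import torch
import torch.nn as nn

class DecoupledSTAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, T, D) -- batch, dancers, frames, feature dim.
        B, N, T, D = x.shape
        # Spatial attention: the N dancers attend to each other per frame.
        s = x.permute(0, 2, 1, 3).reshape(B * T, N, D)
        s, _ = self.spatial(s, s, s)
        x = x + s.reshape(B, T, N, D).permute(0, 2, 1, 3)
        # Temporal attention: each dancer's T frames attend to each other.
        t = x.reshape(B * N, T, D)
        t, _ = self.temporal(t, t, t)
        return x + t.reshape(B, N, T, D)

x = torch.randn(2, 5, 120, 64)            # 5 dancers, 120 frames, dim 64
print(DecoupledSTAttention(64)(x).shape)  # torch.Size([2, 5, 120, 64])
```

In this factorization the spatial pass runs B·T small attentions of size N×N and the temporal pass runs B·N attentions of size T×T, so cost grows linearly in each axis given the other, rather than quadratically in their product.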