[2603.00144] Disentangled Hierarchical VAE for 3D Human-Human Interaction Generation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00144 (cs)
[Submitted on 24 Feb 2026]

Title: Disentangled Hierarchical VAE for 3D Human-Human Interaction Generation
Authors: Zichen Geng, Zeeshan Hayder, Bo Miao, Jian Liu, Wei Liu, Ajmal Mian

Abstract: Generating realistic 3D Human-Human Interaction (HHI) requires coherent modeling of both the physical plausibility of the agents and their interaction semantics. Existing methods compress all motion information into a single latent representation, limiting their ability to capture fine-grained actions and inter-agent interactions. This often leads to semantic misalignment and physically implausible artifacts, such as penetration or missed contact. We propose a Disentangled Hierarchical Variational Autoencoder (DHVAE)-based latent diffusion framework for structured and controllable HHI generation. DHVAE explicitly disentangles the global interaction context and the individual motion patterns into a decoupled latent structure via a CoTransformer module. To mitigate implausible and physically inconsistent contacts in HHI, we incorporate contrastive learning constraints into our DHVAE to promote a more discriminative and physically plausible latent interaction space. For high-fidelity interaction synthesis, DHVAE employs a DDIM-based diffusion denoising proc...
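The abstract mentions a DDIM-based denoising process over the learned latent space. As a rough illustration only (the paper's actual denoiser, schedule, and latent shapes are not given here), a minimal sketch of a deterministic DDIM update step over a two-agent latent might look like this; the agent count, latent dimension, and noise schedule below are all hypothetical:

```python
import numpy as np

def ddim_step(x_t, eps_hat, alpha_bar_t, alpha_bar_prev):
    """One deterministic DDIM update (eta = 0).

    x_t: current noisy latent; eps_hat: noise predicted by the denoiser;
    alpha_bar_*: cumulative products of the noise schedule.
    """
    # Estimate the clean latent x0 from the predicted noise
    x0_pred = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_bar_t)
    # Step toward the previous (less noisy) timestep along the predicted direction
    return np.sqrt(alpha_bar_prev) * x0_pred + np.sqrt(1.0 - alpha_bar_prev) * eps_hat

# Toy denoising loop over a two-agent latent (shapes are illustrative)
rng = np.random.default_rng(0)
z = rng.normal(size=(2, 64))                 # [agents, latent_dim]
alpha_bars = np.linspace(0.99, 0.1, 10)      # alpha_bars[0] ~ clean, last ~ fully noised
for t in range(len(alpha_bars) - 1, 0, -1):
    eps_hat = rng.normal(size=z.shape)       # stand-in for the learned noise predictor
    z = ddim_step(z, eps_hat, alpha_bars[t], alpha_bars[t - 1])
```

In a real pipeline, `eps_hat` would come from the trained denoising network conditioned on the disentangled interaction and per-agent latents, and the final `z` would be decoded by the DHVAE decoder into 3D motion.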