[2512.22065] StreamAvatar: Streaming Diffusion Models for Real-Time Interactive Human Avatars
Computer Science > Computer Vision and Pattern Recognition
arXiv:2512.22065 (cs)
[Submitted on 26 Dec 2025 (v1), last revised 28 Mar 2026 (this version, v2)]

Title: StreamAvatar: Streaming Diffusion Models for Real-Time Interactive Human Avatars
Authors: Zhiyao Sun, Ziqiao Peng, Yifeng Ma, Yi Chen, Zhengguang Zhou, Zixiang Zhou, Guozhen Zhang, Youliang Zhang, Yuan Zhou, Qinglin Lu, Yong-Jin Liu

Abstract: Real-time, streaming interactive avatars represent a critical yet challenging goal in digital human research. Although diffusion-based human avatar generation methods achieve remarkable success, their non-causal architectures and high computational costs make them unsuitable for streaming. Moreover, existing interactive approaches are typically restricted to the head-and-shoulder region, limiting their ability to produce gestures and body motions. To address these challenges, we propose a two-stage autoregressive adaptation and acceleration framework that applies autoregressive distillation and adversarial refinement to adapt a high-fidelity human video diffusion model for real-time, interactive streaming. To ensure long-term stability and consistency, we introduce three key components: a Reference Sink, a Reference-Anchored Positional Re-encoding (RAPR) strategy, and a Consistency-Aware Discriminator. ...
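The abstract names a Reference Sink and a Reference-Anchored Positional Re-encoding (RAPR) strategy without detailing their mechanics. The sketch below is one plausible reading, not the paper's implementation: reference-frame tokens are pinned in a streaming KV cache (in the spirit of attention sinks) while recent tokens roll in a bounded window, and positions are re-encoded relative to the reference anchor so they stay bounded over long streams. The class name `ReferenceSinkCache`, the tensor layout, and the eviction policy are all assumptions introduced for illustration.

```python
# Hypothetical sketch of a streaming KV cache with pinned reference tokens
# ("Reference Sink") and anchor-relative position ids (one reading of "RAPR").
# Shapes are (batch, seq, dim); all names and policies are illustrative only.
import torch


class ReferenceSinkCache:
    def __init__(self, max_stream_tokens: int):
        self.max_stream_tokens = max_stream_tokens  # rolling-window budget
        self.ref_k = self.ref_v = None              # pinned reference-frame tokens
        self.stream_k = self.stream_v = None        # evictable streaming tokens

    def set_reference(self, k: torch.Tensor, v: torch.Tensor) -> None:
        # Reference tokens are never evicted, so the appearance anchor stays
        # visible to attention no matter how long generation runs.
        self.ref_k, self.ref_v = k, v

    def append(self, k: torch.Tensor, v: torch.Tensor) -> None:
        # Append the newest chunk, then drop the oldest streaming tokens
        # once the rolling window overflows.
        if self.stream_k is None:
            self.stream_k, self.stream_v = k, v
        else:
            self.stream_k = torch.cat([self.stream_k, k], dim=1)
            self.stream_v = torch.cat([self.stream_v, v], dim=1)
        if self.stream_k.size(1) > self.max_stream_tokens:
            self.stream_k = self.stream_k[:, -self.max_stream_tokens:]
            self.stream_v = self.stream_v[:, -self.max_stream_tokens:]

    def full_kv(self):
        # Attention attends over [reference sink | recent streaming window].
        k = torch.cat([self.ref_k, self.stream_k], dim=1)
        v = torch.cat([self.ref_v, self.stream_v], dim=1)
        return k, v

    def reanchored_positions(self) -> torch.Tensor:
        # Re-encode positions relative to the reference anchor rather than
        # absolute stream time, so position ids remain bounded even though
        # old streaming tokens have been evicted.
        return torch.arange(self.ref_k.size(1) + self.stream_k.size(1))
```

Under these assumptions, each generated chunk would call `append` and then attend over `full_kv()` with positions from `reanchored_positions()`, keeping per-step cost and position magnitudes constant regardless of stream length.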