[2603.00159] FlowPortrait: Reinforcement Learning for Audio-Driven Portrait Video Generation
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.00159 (cs) [Submitted on 25 Feb 2026]

Title: FlowPortrait: Reinforcement Learning for Audio-Driven Portrait Video Generation

Authors: Weiting Tan, Andy T. Liu, Ming Tu, Xinghua Qu, Philipp Koehn, Lu Lu

Abstract: Generating realistic talking-head videos remains challenging due to persistent issues such as imperfect lip synchronization, unnatural motion, and evaluation metrics that correlate poorly with human perception. We propose FlowPortrait, a reinforcement-learning framework for audio-driven portrait animation built on a multimodal backbone for autoregressive audio-to-video generation. FlowPortrait introduces a human-aligned evaluation system based on Multimodal Large Language Models (MLLMs) to assess lip-sync accuracy, expressiveness, and motion quality. These signals are combined with perceptual and temporal consistency regularizers to form a stable composite reward, which is used to post-train the generator via Group Relative Policy Optimization (GRPO). Extensive experiments, including both automatic evaluations and human preference studies, demonstrate that FlowPortrait consistently produces higher-quality talking-head videos, highlighting the effectiveness of reinforcement learning for portrait animation.

Subjects: Computer Vision and Pattern Recognition
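The abstract describes combining MLLM-derived scores with regularizers into a composite reward and post-training via GRPO. A minimal sketch of these two pieces is shown below; the weight values, function names, and score inputs are illustrative assumptions, not details from the paper:

```python
import numpy as np

def composite_reward(lip_sync, expressiveness, motion,
                     perceptual_penalty, temporal_penalty,
                     weights=(1.0, 0.5, 0.5, 0.2, 0.2)):
    """Hypothetical composite reward: a weighted sum of MLLM-judged
    quality scores minus perceptual/temporal regularization penalties.
    The specific weights here are placeholders, not from the paper."""
    w = weights
    return (w[0] * lip_sync + w[1] * expressiveness + w[2] * motion
            - w[3] * perceptual_penalty - w[4] * temporal_penalty)

def grpo_advantages(group_rewards, eps=1e-8):
    """GRPO's core idea: score each rollout relative to its group by
    normalizing rewards to zero mean and unit variance, avoiding a
    learned value (critic) baseline."""
    r = np.asarray(group_rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Example: four sampled videos for one audio/portrait prompt.
rewards = [composite_reward(0.9, 0.7, 0.8, 0.1, 0.2),
           composite_reward(0.5, 0.6, 0.4, 0.3, 0.3),
           composite_reward(0.8, 0.8, 0.7, 0.2, 0.1),
           composite_reward(0.3, 0.4, 0.5, 0.4, 0.4)]
advantages = grpo_advantages(rewards)
```

Each advantage then weights the policy-gradient update for its rollout; rollouts scoring above the group mean are reinforced, those below are suppressed.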