[2603.05425] RelaxFlow: Text-Driven Amodal 3D Generation
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.05425 (cs)
[Submitted on 5 Mar 2026]

Title: RelaxFlow: Text-Driven Amodal 3D Generation
Authors: Jiayin Zhu, Guoji Fu, Xiaolu Liu, Qiyuan He, Yicong Li, Angela Yao

Abstract: Image-to-3D generation faces inherent semantic ambiguity under occlusion: partial observation alone is often insufficient to determine an object's category. In this work, we formalize text-driven amodal 3D generation, in which text prompts steer the completion of unseen regions while strictly preserving the input observation. Crucially, we identify that these two objectives demand distinct control granularities: rigid control for the observation versus relaxed structural control for the prompt. To this end, we propose RelaxFlow, a training-free dual-branch framework that decouples control granularity via a Multi-Prior Consensus Module and a Relaxation Mechanism. Theoretically, we prove that our relaxation is equivalent to applying a low-pass filter to the generative vector field, which suppresses high-frequency instance details to isolate the geometric structure that accommodates the observation. To facilitate evaluation, we introduce two diagnostic benchmarks, ExtremeOcc-3D and AmbiSem-3D. Extensive experiments demonstrate that RelaxFlow successfully steers the generation of unseen regions to match the prompt int...
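The abstract's claim that relaxation acts as a low-pass filter on the generative vector field can be illustrated with a minimal sketch. The snippet below is a hypothetical, simplified illustration (not the paper's implementation): it applies an ideal low-pass filter in the Fourier domain to a sampled 1D velocity field, showing how high-frequency "instance detail" is suppressed while low-frequency structure survives. The function name `low_pass_velocity` and the `cutoff_ratio` parameter are assumptions for this example.

```python
import numpy as np

def low_pass_velocity(v, cutoff_ratio=0.1):
    """Suppress high-frequency components of a sampled velocity field
    with an ideal low-pass filter in the Fourier domain.
    Hypothetical illustration of the 'relaxation = low-pass filter' view;
    not the RelaxFlow implementation."""
    spectrum = np.fft.rfft(v)
    # keep only the lowest `cutoff` frequency bins
    cutoff = max(1, int(len(spectrum) * cutoff_ratio))
    spectrum[cutoff:] = 0.0
    return np.fft.irfft(spectrum, n=len(v))

# A toy velocity field: smooth structural component sin(x)
# plus high-frequency "instance detail" at 40x the base frequency.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
v = np.sin(x) + 0.3 * np.sin(40 * x)

v_relaxed = low_pass_velocity(v, cutoff_ratio=0.05)

# After filtering, the relaxed field tracks the low-frequency structure.
err = np.max(np.abs(v_relaxed - np.sin(x)))
print(err < 1e-6)  # → True
```

Under this reading, generating with the relaxed (filtered) field follows the coarse geometric structure implied by the prompt while leaving fine instance details free to conform to the observed input.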