[2603.19807] Enhancing Alignment for Unified Multimodal Models via Semantically-Grounded Supervision
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.19807 (cs)
[Submitted on 20 Mar 2026]

Title: Enhancing Alignment for Unified Multimodal Models via Semantically-Grounded Supervision
Authors: Jiyeong Kim, Yerim So, Hyesong Choi, Uiwon Hwang, Dongbo Min

Abstract: Unified Multimodal Models (UMMs) have emerged as a promising paradigm that integrates multimodal understanding and generation within a unified modeling framework. However, current generative training paradigms suffer from inherent limitations. We present Semantically-Grounded Supervision (SeGroS), a fine-tuning framework designed to resolve the granularity mismatch and supervisory redundancy in UMMs. At its core, we propose a novel visual grounding map to construct two complementary supervision signals. First, we formulate semantic Visual Hints to compensate for the sparsity of text prompts. Second, we generate a semantically-grounded Corrupted Input to explicitly enhance the supervision of masking-based UMMs by restricting the reconstruction loss to core text-aligned regions. Extensive evaluations on GenEval, DPGBench, and CompBench demonstrate that SeGroS significantly improves generation fidelity and cross-modal alignment across various UMM architectures.

Subjects: Computer Vision and Pattern Recognition (cs.CV); A...
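The abstract's second supervision signal restricts a masking-based reconstruction loss to regions that a grounding map marks as text-aligned. A minimal sketch of that idea, assuming a dense grounding map in [0, 1] and a simple threshold rule (the function name, the threshold, and the squared-error loss are illustrative assumptions, not details from the paper):

```python
import numpy as np

def grounded_reconstruction_loss(pred, target, grounding_map, threshold=0.5):
    """Average a squared-error reconstruction loss only over positions the
    grounding map marks as text-aligned (values above `threshold`).
    This is a hypothetical sketch of the idea, not the paper's method."""
    mask = (grounding_map > threshold).astype(pred.dtype)
    sq_err = (pred - target) ** 2
    denom = mask.sum()
    if denom == 0:
        # No grounded region: contribute no loss rather than divide by zero.
        return 0.0
    return float((sq_err * mask).sum() / denom)

pred = np.array([[1.0, 2.0], [3.0, 4.0]])
target = np.zeros((2, 2))
grounding = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = grounded_reconstruction_loss(pred, target, grounding)
```

With this toy input only the two grounded positions (errors 1 and 16) contribute, so the loss averages to 8.5; ungrounded positions are excluded entirely, which is the "restrict supervision to core text-aligned regions" intuition.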