[2603.22187] Seeing is Improving: Visual Feedback for Iterative Text Layout Refinement
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.22187 (cs)
[Submitted on 23 Mar 2026]
Title: Seeing is Improving: Visual Feedback for Iterative Text Layout Refinement
Authors: Junrong Guo, Shancheng Fang, Yadong Qu, Hongtao Xie

Abstract: Recent advances in Multimodal Large Language Models (MLLMs) have enabled the automated generation of structured layouts from natural language descriptions. Existing methods typically follow a code-only paradigm: they generate code that represents a layout, which a graphics engine then renders into the final image. However, these methods are blind to the rendered visual outcome, making it difficult to guarantee readability and aesthetics. In this paper, we identify visual feedback as a critical factor in layout generation and propose the Visual Feedback Layout Model (VFLM), a self-improving framework that leverages visual feedback for iterative refinement. VFLM performs adaptive reflective generation: it uses visual information to reflect on issues in previous outputs and iteratively regenerates until satisfactory quality is achieved. This capability is trained through reinforcement learning with a visually grounded reward model that incorporates OCR accuracy. By rewarding only the final generated outcome, we can effectively stimulate the model's iterative and reflective generative c...
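The generate-render-reflect loop the abstract describes can be illustrated with a minimal sketch. Everything here is a toy stand-in, not the paper's actual system: `Layout`, `render`, and `visual_reward` are hypothetical names, the "renderer" just counts how many characters fit per line, and the reward is a crude proxy for the OCR-accuracy term rather than a real OCR model.

```python
from dataclasses import dataclass


@dataclass
class Layout:
    font_size: int
    box_width: int


def render(layout: Layout, text: str) -> list[str]:
    # Toy "renderer": split the text into lines based on how many
    # characters fit across the box at the current font size.
    chars_per_line = max(1, layout.box_width // layout.font_size)
    return [text[i:i + chars_per_line]
            for i in range(0, len(text), chars_per_line)]


def visual_reward(lines: list[str], max_lines: int = 3) -> float:
    # Toy visually grounded reward: penalize overflow past max_lines,
    # standing in for an OCR-based readability score.
    return 1.0 if len(lines) <= max_lines else max_lines / len(lines)


def refine(text: str, max_iters: int = 10, threshold: float = 0.99):
    # Iterative refinement: inspect the rendered outcome, "reflect" on
    # the problem (here: text overflow), and regenerate the layout
    # until the reward on the final outcome is satisfactory.
    layout = Layout(font_size=40, box_width=200)
    reward = 0.0
    for _ in range(max_iters):
        lines = render(layout, text)
        reward = visual_reward(lines)
        if reward >= threshold:
            break
        # Reflection step: the text overflows, so shrink the font.
        layout = Layout(layout.font_size - 4, layout.box_width)
    return layout, reward


layout, reward = refine("hello world example text")
print(layout, reward)  # font size shrinks until the text fits
```

The key point the abstract makes is visible even in this sketch: the quality signal is computed on the *rendered* result, not on the layout code itself, so only changes that improve the visual outcome are rewarded.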