[2605.07394] BalCapRL: A Balanced Framework for RL-Based MLLM Image Captioning
Computer Science > Computer Vision and Pattern Recognition
arXiv:2605.07394 (cs)
[Submitted on 8 May 2026]

Title: BalCapRL: A Balanced Framework for RL-Based MLLM Image Captioning
Authors: Shaokai Ye, Vasileios Saveris, Yihao Qian, Jiaming Hu, Elmira Amirloo, Peter Grasch

Abstract: Image captioning is one of the most fundamental tasks in computer vision. Owing to its open-ended nature, it has received significant attention in the era of multimodal large language models (MLLMs). In pursuit of ever more detailed and accurate captions, recent work has increasingly turned to reinforcement learning (RL). However, existing captioning-RL methods and evaluation metrics often emphasize a narrow notion of caption quality, inducing trade-offs across core dimensions of captioning. For example, utility-oriented objectives can encourage noisy, hallucinated, or overlong captions that improve downstream question answering while harming fluency, whereas arena-style objectives can favor fluent but generic descriptions with limited usefulness. To address this, we propose a more balanced RL framework that jointly optimizes utility-aware correctness, reference coverage, and linguistic quality. In order to effectively optimize the resulting continuous multi-objective reward formulation, we apply GDPO-style reward-decoupled normalization to...
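The reward-decoupled normalization mentioned at the end of the (truncated) abstract can be sketched as follows: each reward component (here named `correctness`, `coverage`, and `fluency` to mirror the abstract's three objectives) is normalized separately across the group of sampled captions before the per-sample advantages are summed, rather than normalizing a single aggregated reward. This is a minimal illustrative sketch of that idea; the function name, z-score normalization, and equal weighting are assumptions, not the paper's actual implementation.

```python
# Sketch of reward-decoupled normalization for a multi-objective RL reward.
# Assumption: each component is z-scored within the sampled group, then summed,
# so a high-variance component cannot dominate the combined advantage.
from statistics import mean, pstdev

def decoupled_advantages(component_rewards, eps=1e-8):
    """component_rewards: dict mapping reward name -> list of per-sample
    rewards for one group of sampled captions. Returns per-sample advantages
    as the sum of each component's group-normalized (z-scored) reward."""
    n = len(next(iter(component_rewards.values())))
    advantages = [0.0] * n
    for name, rewards in component_rewards.items():
        mu, sigma = mean(rewards), pstdev(rewards)
        for i, r in enumerate(rewards):
            advantages[i] += (r - mu) / (sigma + eps)
    return advantages

# Example group of 3 sampled captions: correctness is on a much larger raw
# scale than coverage or fluency, but after per-component normalization all
# three contribute comparably to each sample's advantage.
adv = decoupled_advantages({
    "correctness": [10.0, 0.0, 5.0],
    "coverage":    [0.6, 0.4, 0.5],
    "fluency":     [0.9, 0.9, 0.3],
})
```

Normalizing per component (rather than normalizing the summed reward) is what keeps the trade-off balanced: a caption that maximizes only one dimension no longer swamps the group statistics of the others.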