[2512.16145] MRG-R1: Reinforcement Learning for Clinically Aligned Medical Report Generation
Computer Science > Computation and Language

arXiv:2512.16145 (cs)

[Submitted on 18 Dec 2025 (v1), last revised 27 Mar 2026 (this version, v2)]

Title: MRG-R1: Reinforcement Learning for Clinically Aligned Medical Report Generation

Authors: Pengyu Wang, Shuchang Ye, Usman Naseem, Jinman Kim

Abstract: Medical report generation aims to automatically produce radiology-style reports from medical images, supporting efficient and accurate clinical practice. However, existing approaches predominantly rely on token-level likelihood training, which favors local lexical matching and leaves clinical correctness under-specified in the training objective. Token-level likelihood optimization rewards surface-form agreement and therefore fails to directly encode constraints on medically accurate findings. To address this objective mismatch, we introduce a semantic-driven reinforcement learning (SRL) framework for medical report generation, named MRG-R1, which directly optimizes report-level clinical correctness rather than token-level likelihood. The key module is a clinically grounded report-level reward function, which reinforces semantic agreement in clinically relevant findings between generated and reference reports, thereby enabling learning signals that explicitly constrain me...
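To make the idea of a report-level reward concrete, the sketch below scores a generated report by the F1 overlap between the sets of clinical findings extracted from it and from the reference report. This is only an illustration of the general technique the abstract describes: the function name `finding_reward`, the use of F1, and the string-set representation of findings are all assumptions, not the paper's actual reward design.

```python
from typing import Set

def finding_reward(pred_findings: Set[str], ref_findings: Set[str]) -> float:
    """Illustrative report-level reward: F1 agreement between the clinical
    findings extracted from the generated report (pred_findings) and from
    the reference report (ref_findings).

    NOTE: this is a hypothetical sketch; the paper's reward may be defined
    differently (e.g. weighted by finding severity or using an extractor model).
    """
    # Both reports contain no findings: treat as perfect agreement.
    if not pred_findings and not ref_findings:
        return 1.0
    # Count findings present in both reports (true positives).
    tp = len(pred_findings & ref_findings)
    if tp == 0:
        return 0.0
    precision = tp / len(pred_findings)
    recall = tp / len(ref_findings)
    # Harmonic mean of precision and recall (F1).
    return 2 * precision * recall / (precision + recall)

# Example: the generated report mentions two of the three reference findings.
reward = finding_reward(
    {"cardiomegaly", "pleural effusion"},
    {"cardiomegaly", "pleural effusion", "edema"},
)
```

In an RL setup such as the one the abstract outlines, a scalar like this would be computed once per sampled report and used as the policy-gradient reward, in contrast to a per-token likelihood objective.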