[2603.09723] RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation
Computer Science > Computation and Language
arXiv:2603.09723 (cs)
[Submitted on 10 Mar 2026 (v1), last revised 27 Apr 2026 (this version, v2)]

Title: RbtAct: Rebuttal as Supervision for Actionable Review Feedback Generation
Authors: Sihong Wu, Yiling Ma, Yilun Zhao, Tiansheng Hu, Owen Jiang, Manasi Patwardhan, Arman Cohan

Abstract: Large language models (LLMs) are increasingly used across the scientific workflow, including to draft peer-review reports. However, many AI-generated reviews are superficial and insufficiently actionable, leaving authors without concrete, implementable guidance; this gap motivates our work. We propose RbtAct, which targets actionable review feedback generation by placing existing peer-review rebuttals at the center of learning. Rebuttals reveal which reviewer comments led to concrete revisions or specific plans, and which were merely defended against. Building on this insight, we leverage rebuttals as implicit supervision to directly optimize a feedback generator for actionability. To support this objective, we propose a new task, perspective-conditioned segment-level review feedback generation, in which the model must produce a single focused comment based on the complete paper and a specified perspective, such as experiments or writing. We also build a large dataset...