[2603.05659] When Rubrics Fail: Error Enumeration as Reward in Reference-Free RL Post-Training for Virtual Try-On
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.05659 (cs)
[Submitted on 5 Mar 2026 (v1), last revised 31 Mar 2026 (this version, v2)]

Title: When Rubrics Fail: Error Enumeration as Reward in Reference-Free RL Post-Training for Virtual Try-On
Authors: Wisdom Ikezogwo, Mehmet Saygin Seyfioglu, Ranjay Krishna, Karim Bouyarmane

Abstract: Reinforcement learning with verifiable rewards (RLVR) and Rubrics as Rewards (RaR) have driven strong gains in domains with clear correctness signals, and even in subjective domains, by synthesizing evaluation criteria from ideal reference answers. But many real-world tasks admit multiple valid outputs and lack the single ideal answer that rubric generation depends on. We identify this reference-free setting as a gap in current post-training methods and propose Implicit Error Counting (IEC) to fill it. Instead of checking what a response gets right against a rubric, IEC enumerates what it gets wrong, applying severity-weighted scores across task-relevant axes and converting them into calibrated per-aspect rewards. We show that naïve explicit enumeration is too noisy for stable optimization, and that two design choices, implicit score emission and group calibration, are necessary to make error counting a reliable reward. As a case...
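
To make the reward construction concrete, below is a minimal sketch of severity-weighted error counting with group calibration. The severity weights, aspect names, and the reading of "group calibration" as a group-relative normalization (a GRPO-style baseline) are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Hypothetical severity weights per error severity level (assumption, not from the paper).
SEVERITY = {"minor": 0.25, "moderate": 0.5, "severe": 1.0}

def aspect_penalty(errors):
    """Severity-weighted error count for one aspect of one response.

    `errors` is a list of (error_name, severity) tuples enumerated by a judge.
    """
    return sum(SEVERITY[sev] for _, sev in errors)

def group_calibrated_rewards(group_errors, aspects):
    """Convert per-aspect error enumerations for a group of candidate responses
    into calibrated per-aspect rewards, then average them into one scalar per response.

    Group calibration is sketched here as normalizing each aspect's (negated)
    penalties across the group, so rewards are relative to the group baseline.
    """
    per_aspect = []
    for aspect in aspects:
        penalties = np.array([aspect_penalty(resp.get(aspect, [])) for resp in group_errors])
        raw = -penalties                      # more errors -> lower reward
        calibrated = (raw - raw.mean()) / (raw.std() + 1e-8)
        per_aspect.append(calibrated)
    return np.mean(per_aspect, axis=0)

# Example: three candidate try-on outputs scored along two hypothetical aspects.
group = [
    {"garment_fidelity": [("texture_mismatch", "moderate")], "identity": []},
    {"garment_fidelity": [], "identity": [("face_distortion", "severe")]},
    {"garment_fidelity": [("logo_missing", "minor"), ("color_shift", "minor")], "identity": []},
]
print(group_calibrated_rewards(group, ["garment_fidelity", "identity"]))
```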