[2604.01506] Beyond Logit Adjustment: A Residual Decomposition Framework for Long-Tailed Reranking
Computer Science > Machine Learning
arXiv:2604.01506 (cs)
[Submitted on 2 Apr 2026]

Title: Beyond Logit Adjustment: A Residual Decomposition Framework for Long-Tailed Reranking
Authors: Zhanliang Wang, Hongzhuo Chen, Quan Minh Nguyen, Mian Umair Ahsan, Kai Wang

Abstract: Long-tailed classification, where a small number of frequent classes dominate many rare ones, remains challenging because models systematically favor frequent classes at inference time. Existing post-hoc methods such as logit adjustment address this by adding a fixed classwise offset to the base-model logits. However, the correction required to restore the relative ranking of two classes need not be constant across inputs, and a fixed offset cannot adapt to such variation. We study this problem through Bayes-optimal reranking on a base-model top-k shortlist. The gap between the optimal score and the base score, the residual correction, decomposes into a classwise component that is constant within each class and a pairwise component that depends on the input and the competing labels. When the residual is purely classwise, a fixed offset suffices to recover the Bayes-optimal ordering. We further show that when the same label pair induces incompatible ordering constraints across contexts, no fixed offset can achieve this recovery. Th...
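The contrast the abstract draws between fixed classwise offsets and input-dependent corrections can be made concrete. The following is a minimal sketch, assuming the standard post-hoc logit-adjustment rule (subtract tau * log prior from each class logit); the function names, logits, and priors are illustrative and not taken from the paper.

    import numpy as np

    def logit_adjust(logits, priors, tau=1.0):
        """Fixed classwise offset: subtract tau * log(prior) from each logit."""
        return logits - tau * np.log(priors)

    def rerank_topk(logits, priors, k=5, tau=1.0):
        """Rerank only the base model's top-k shortlist with adjusted scores."""
        shortlist = np.argsort(logits)[-k:]           # base-model top-k class ids
        adjusted = logit_adjust(logits[shortlist], priors[shortlist], tau)
        return shortlist[np.argsort(adjusted)[::-1]]  # best-first ordering

    priors = np.array([0.70, 0.05])       # frequent class 0, rare class 1

    # Suppose the true ordering in context A puts class 1 over class 0 (the
    # offset on class 1 must exceed the 0.4 logit gap), while in context B it
    # keeps class 0 over class 1 (the offset must stay below the 0.1 gap).
    # No single classwise offset satisfies both constraints.
    context_a = np.array([2.0, 1.6])      # class 0 leads by 0.4
    context_b = np.array([2.0, 1.9])      # class 0 leads by 0.1

    for name, logits in [("context A", context_a), ("context B", context_b)]:
        print(name, "->", rerank_topk(logits, priors, k=2))
    # Prints [1 0] for both contexts: the prior-based offset (about 2.6
    # logits here) flips context A correctly but over-corrects context B.

In this toy setup both contexts end up ranking the rare class first even though the base ordering in context B was already correct; this is the kind of incompatible pairwise constraint the abstract argues no fixed offset can resolve.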