[2603.25145] Learning to Rank Caption Chains for Video-Text Alignment
Computer Science > Computer Vision and Pattern Recognition

arXiv:2603.25145 (cs) [Submitted on 26 Mar 2026]

Title: Learning to Rank Caption Chains for Video-Text Alignment

Authors: Ansel Blume, Burak Uzkent, Shalini Chaudhuri, Garin Kessler

Abstract: Direct preference optimization (DPO) is an effective technique to train language models to generate preferred over dispreferred responses. However, this binary "winner-takes-all" approach is suboptimal for vision-language models whose response quality is highly dependent on visual content. In particular, a response may still be faithful to the visual inputs even if it is less preferable than an alternative. The standard Bradley-Terry DPO formulation lacks this nuance, upweighting winning responses without sufficient regard for whether the "losing" response still maintains high visual fidelity. In this work, we investigate ranking optimization as an alternative that more precisely situates responses' faithfulness to visual inputs. We focus on video-text alignment using detailed video captions, proposing a method to generate challenging, totally ordered caption chains at scale through repeated caption degradation. Our results show ranking optimization outperforms binary DPO for long-form content generation and assessment, and importantly, we find that these approaches require finet...
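To make the contrast between the two objectives concrete, the sketch below compares a binary Bradley-Terry DPO loss on a single (winner, loser) pair with a listwise Plackett-Luce loss over a totally ordered caption chain, which is one standard way to generalize pairwise preferences to rankings. This is an illustrative sketch only; the function names, the default `beta`, and the choice of Plackett-Luce as the ranking objective are assumptions, not the paper's exact formulation.

```python
import math

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.1):
    """Binary Bradley-Terry DPO loss for one (winner, loser) pair.

    pi_w / pi_l: sequence log-probs of winner / loser under the policy.
    ref_w / ref_l: the same log-probs under the frozen reference model.
    The loss is -log sigmoid of the scaled implicit-reward margin.
    """
    margin = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def plackett_luce_loss(rewards):
    """Listwise ranking loss over a totally ordered chain.

    `rewards` are implicit rewards (e.g. beta * (logp_policy - logp_ref))
    listed best-first, as in a chain of progressively degraded captions.
    Each term is -log P(item i is ranked above all items after it).
    """
    loss = 0.0
    for i in range(len(rewards) - 1):  # last item's term is always 0
        denom = sum(math.exp(r) for r in rewards[i:])
        loss += -(rewards[i] - math.log(denom))
    return loss
```

With a tied pair, the DPO loss reduces to log 2; for a chain of degraded captions, the Plackett-Luce loss is smaller when the implicit rewards actually decrease along the chain than when the ordering is violated, which is the gradient signal a ranking objective provides beyond a single winner/loser split.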