[2604.00001] Two-Stage Optimizer-Aware Online Data Selection for Large Language Models
Computer Science > Machine Learning

arXiv:2604.00001 (cs) [Submitted on 8 Mar 2026]

Title: Two-Stage Optimizer-Aware Online Data Selection for Large Language Models

Authors: Fangxin Wang, Peyman Baghershahi, Langzhou He, Henry Peng Zou, Sourav Medya, Philip S. Yu

Abstract: Gradient-based data selection offers a principled framework for estimating sample utility in large language model (LLM) fine-tuning, but existing methods are mostly designed for offline settings. They are therefore less suited to online fine-tuning, where data arrives sequentially, sample utility is step-dependent, and the effective update geometry is shaped by adaptive optimizers. We propose an optimizer-aware framework for gradient-based online data selection and reweighting in LLM fine-tuning. Our key idea is to view online selection not as static sample ranking, but as shaping the next target-oriented update under the optimizer state. We formulate this as an optimizer-aware update-matching problem, establish its connection to second-order target utility, and show why subset-level construction must account for interactions and redundancy among selected samples. Based on this view, we develop a two-stage Filter-then-Weight algorithm that first filters geometrically useful candidates and then optimizes their coefficients. To make the fram...
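The abstract's two-stage Filter-then-Weight idea can be illustrated with a minimal sketch. The code below is a hedged illustration, not the paper's actual algorithm: it assumes an Adam-style diagonal preconditioner from the optimizer's second-moment estimate, scores candidates by alignment of their preconditioned gradients with the preconditioned target gradient (stage 1), and then fits nonnegative coefficients by ridge least squares so the weighted combination matches the target update (stage 2). The function name, scoring rule, and solver are all illustrative assumptions.

```python
import numpy as np

def filter_then_weight(grads, target_grad, v, k=4, eps=1e-8, lam=1e-3):
    """Sketch of a two-stage Filter-then-Weight selection step.

    grads:       (n, d) per-sample gradients of the candidate batch
    target_grad: (d,)   gradient of the target (e.g., validation) objective
    v:           (d,)   optimizer second-moment estimate (Adam-style)

    Returns the indices of the k kept samples and their nonnegative weights.
    All of the above is an assumed formulation for illustration only.
    """
    precond = 1.0 / (np.sqrt(v) + eps)      # Adam-like diagonal preconditioner
    target_dir = precond * target_grad      # effective target update direction

    # Stage 1 (Filter): keep candidates whose preconditioned gradients align
    # best with the effective target direction (cosine similarity).
    effective = grads * precond             # (n, d), broadcast over rows
    scores = effective @ target_dir / (
        np.linalg.norm(effective, axis=1) * np.linalg.norm(target_dir) + eps)
    keep = np.argsort(scores)[-k:]

    # Stage 2 (Weight): choose coefficients so the weighted sum of selected
    # effective gradients matches the target update; ridge least squares
    # followed by clipping to the nonnegative orthant as a crude projection.
    A = effective[keep]                     # (k, d)
    G = A @ A.T + lam * np.eye(k)           # regularized Gram matrix
    w = np.linalg.solve(G, A @ target_dir)
    w = np.clip(w, 0.0, None)
    return keep, w
```

Accounting for the Gram matrix in stage 2 is one way to handle the subset-level interaction and redundancy effects the abstract mentions: two near-duplicate candidates share their weight rather than each receiving full credit.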