[2605.05227] Rethinking Data Curation in LLM Training: Online Reweighting Offers Better Generalization than Offline Methods
Computer Science > Machine Learning

arXiv:2605.05227 (cs)

[Submitted on 19 Apr 2026]

Title: Rethinking Data Curation in LLM Training: Online Reweighting Offers Better Generalization than Offline Methods

Authors: Wanru Zhao, Yihong Chen, Yuzhi Tang, Wentao Ma, Shengchao Hu, Shell Xu Hu, Alex Iacob, Abhinav Mehrotra, Nicholas D. Lane

Abstract: Data curation is a critical yet under-explored area in large language model (LLM) training. Existing methods, such as data selection and mixing, operate in an offline paradigm, detached from training. This separation introduces engineering overhead and makes curation brittle: the entire pipeline must be re-run whenever the model or task shifts. Moreover, offline methods alter data size through hard filtering or resampling, often sacrificing data diversity and harming generalization. We propose to rethink data curation as an online reweighting problem, where sample importance is dynamically adjusted during training via loss weighting rather than static pre-processing. Specifically, we introduce ADAPT (Adaptive Data reweighting for Pretraining and FineTuning), a dynamic online framework that reweights training samples with adaptive per-sample learning rates guided by similarity-based quality signals, without changing the number of t...
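The abstract describes the core mechanism at a high level: rather than filtering data offline, each sample's loss is scaled online by a quality weight, which is equivalent to giving each sample its own effective learning rate. Below is a minimal PyTorch sketch of that general idea, not the paper's actual algorithm: the model interface (returning pooled per-sample hidden states), the reference_embedding quality anchor, and the similarity_weights function are all hypothetical stand-ins for the similarity-based quality signals ADAPT specifies.

```python
import torch
import torch.nn.functional as F

def similarity_weights(hidden, reference, temperature=1.0):
    # Hypothetical quality signal: cosine similarity between each sample's
    # pooled representation (B, D) and a reference "quality anchor" (D,).
    sims = F.cosine_similarity(hidden, reference.unsqueeze(0), dim=-1)  # (B,)
    # Softmax-normalize, then rescale so weights average to 1 across the batch,
    # keeping the overall loss magnitude comparable to unweighted training.
    return torch.softmax(sims / temperature, dim=0) * sims.numel()

def reweighted_step(model, optimizer, batch, reference_embedding):
    # One online-reweighting training step: per-sample losses are computed
    # with reduction="none" and scaled by quality weights before backprop.
    input_ids, labels = batch
    logits, hidden = model(input_ids)  # assumed: model also returns pooled hidden states
    per_token_loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1), reduction="none"
    ).view(labels.size())
    per_sample_loss = per_token_loss.mean(dim=-1)  # (B,)
    with torch.no_grad():  # weights act as fixed coefficients, not learned params
        w = similarity_weights(hidden, reference_embedding)
    loss = (w * per_sample_loss).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the weights multiply losses rather than drop samples, every example still contributes gradient signal, which is how this framing preserves data diversity instead of shrinking the dataset as hard filtering does.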