[2512.14549] Dual-objective Language Models: Training Efficiency Without Overfitting
arXiv:2512.14549 (cs.CL)

[Submitted on 16 Dec 2025 (v1), last revised 27 Mar 2026 (this version, v3)]

Title: Dual-objective Language Models: Training Efficiency Without Overfitting
Authors: David Samuel, Lucas Georges Gabriel Charpentier

Abstract: This paper combines autoregressive and masked-diffusion training objectives without any architectural modifications, resulting in flexible language models that outperform single-objective models. Autoregressive modeling has been a popular approach, partly because of its training efficiency; however, that efficiency comes at the cost of increased sensitivity to overfitting. Masked-diffusion models, on the other hand, are less efficient to train but more resilient to overfitting. In this work, we demonstrate that dual-objective training achieves the best of both worlds. To derive the optimal balance between the two objectives, we train and evaluate 50 language models under varying levels of data repetition. We show that combining both objectives is optimal under all evaluated settings, and that the optimal balance is similar whether targeting autoregressive or masked-diffusion downstream performance.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2512.14549 [cs.CL] (or arXiv:2512...
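
To make the idea of a dual objective concrete, below is a minimal PyTorch sketch of one plausible way to mix an autoregressive next-token loss with a masked-diffusion (denoising) loss using a single weight lam. This is not the paper's implementation: the abstract does not specify the loss weighting or the attention handling, so the ToyLM model, the MASK_ID constant, the 1/t reweighting (borrowed from common masked-diffusion formulations), and the convex combination via lam are all illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB = 1000     # assumed vocabulary size for the toy example
    MASK_ID = 3      # assumed id of the [MASK] token

    class ToyLM(nn.Module):
        """A tiny transformer used only to make the sketch runnable."""
        def __init__(self, vocab=VOCAB, dim=64, heads=4):
            super().__init__()
            self.emb = nn.Embedding(vocab, dim)
            self.layer = nn.TransformerEncoderLayer(dim, heads, dim_feedforward=128, batch_first=True)
            self.out = nn.Linear(dim, vocab)

        def forward(self, ids, causal):
            x = self.emb(ids)
            n = ids.size(1)
            # causal mask for the autoregressive objective, full attention otherwise
            attn_mask = nn.Transformer.generate_square_subsequent_mask(n) if causal else None
            x = self.layer(x, src_mask=attn_mask)
            return self.out(x)

    def autoregressive_loss(model, ids):
        # predict token t+1 from tokens up to t (standard next-token cross-entropy)
        logits = model(ids[:, :-1], causal=True)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)), ids[:, 1:].reshape(-1))

    def masked_diffusion_loss(model, ids, mask_id=MASK_ID):
        # sample a masking ratio t per sequence, corrupt tokens independently,
        # and predict the originals at the masked positions, reweighted by 1/t
        b, n = ids.shape
        t = torch.rand(b, 1).clamp(min=1e-3)          # masking ratio in (0, 1]
        mask = torch.rand(b, n) < t                   # positions to corrupt
        corrupted = ids.masked_fill(mask, mask_id)
        logits = model(corrupted, causal=False)
        loss_tok = F.cross_entropy(
            logits.reshape(-1, logits.size(-1)), ids.reshape(-1), reduction="none"
        ).reshape(b, n)
        per_seq = (loss_tok * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1) / t.squeeze(1)
        return per_seq.mean()

    def dual_objective_loss(model, ids, lam=0.5):
        # lam is the balance hyperparameter the paper sweeps over; lam=1 recovers a
        # purely autoregressive model, lam=0 a purely masked-diffusion one
        return lam * autoregressive_loss(model, ids) + (1 - lam) * masked_diffusion_loss(model, ids)

    # usage example
    model = ToyLM()
    ids = torch.randint(5, VOCAB, (8, 32))            # keep low ids free for special tokens
    loss = dual_objective_loss(model, ids, lam=0.7)
    loss.backward()

In this sketch the balance enters as a fixed convex combination of the two losses on every batch; whether the paper mixes losses this way, alternates objectives across batches, or uses some other schedule is not stated in the abstract.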