[2603.20100] An Empirical Study of SFT-DPO Interaction and Parameterization in Small Language Models
Computer Science > Computation and Language
arXiv:2603.20100 (cs)
[Submitted on 20 Mar 2026]

Title: An Empirical Study of SFT-DPO Interaction and Parameterization in Small Language Models
Authors: Yuming Feng, Christy Yang

Abstract: Direct Preference Optimization (DPO) is widely used after supervised fine-tuning (SFT) to align language models, yet its empirical behavior with small backbones and modest data remains under-characterized. We systematically compare SFT-only, DPO-only, and staged SFT-to-DPO training, alongside full fine-tuning (FFT) versus LoRA, on a GPT-2-scale decoder, evaluating paraphrase detection and Shakespearean sonnet continuation. DPO yields small, task-dependent gains over strong SFT and can match competitive SFT accuracy without a warm start when the preference construction closely parallels the supervised objective. In contrast, parameterization dominates: FFT consistently outperforms LoRA at matched training depth, and LoRA does not reduce wall-clock time on our hardware. These findings indicate that, in this small-scale regime, supervised full-parameter adaptation remains the primary performance lever, while preference optimization and low-rank adaptation provide limited marginal returns.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
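For readers unfamiliar with the objective being compared here, the per-example DPO loss penalizes the policy when its log-probability margin between the chosen and rejected response does not exceed the reference model's margin. The sketch below is illustrative only, not the authors' code; the function name, the scalar interface, and the beta value are assumptions.

```python
import math

def dpo_loss(policy_logp_chosen: float, policy_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is a summed token log-probability of the full response
    under the policy or the frozen reference model.
    """
    # How much more the policy prefers the chosen response, relative to the reference.
    margin = ((policy_logp_chosen - ref_logp_chosen)
              - (policy_logp_rejected - ref_logp_rejected))
    # Logistic loss on the scaled margin; zero margin gives -log(0.5) = ln 2.
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))
```

With a zero margin (policy identical to the reference) the loss is ln 2; as the policy's preference for the chosen response grows beyond the reference's, the loss decreases toward zero, which is the gradient signal DPO applies on top of an SFT (or raw) checkpoint.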