[2508.04329] Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning
Computer Science > Machine Learning
arXiv:2508.04329 (cs)
[Submitted on 6 Aug 2025 (v1), last revised 28 Mar 2026 (this version, v5)]

Title: Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning
Authors: Ali Taheri, Alireza Taban, Qizhou Wang, Shanshan Ye, Abdolreza Mirzaei, Tongliang Liu, Bo Han

Abstract: Supervised fine-tuning (SFT) plays a critical role in adapting pretrained large language models (LLMs), notably enhancing their capacity to acquire domain-specific knowledge while preserving or potentially augmenting their general-purpose capabilities. However, the efficacy of SFT hinges on both data quality and data volume; otherwise it may yield limited performance gains or even degradation relative to the associated baselines. To mitigate this reliance, we suggest categorizing the tokens within each corpus into two parts -- positive and negative tokens -- based on whether they are useful for improving model performance. Positive tokens can be trained in common ways, whereas negative tokens, which may lack essential semantics or be misleading, should be explicitly forgotten. Overall, the token categorization keeps the model from learning less informative content, and the forgetting guides the model more precisely on what information to learn. We conduct experiments across diverse and well-established benchmarks…
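The positive/negative token split described in the abstract can be pictured as a per-token sign on the training loss: positive tokens contribute ordinary cross-entropy, while negative tokens contribute a negated term that pushes their probability down (an explicit "forgetting" signal). The sketch below is illustrative only; the abstract does not specify the paper's actual objective, and the sign-flip form, function names, and the criterion for labeling tokens are assumptions here.

```python
import numpy as np

def token_losses(logits, targets, positive_mask):
    """Per-token objective for the positive/negative token split.

    logits:        (seq_len, vocab) unnormalized scores
    targets:       (seq_len,) target token ids
    positive_mask: (seq_len,) True for positive tokens, False for negative

    Positive tokens get standard cross-entropy; negative tokens get a
    negated cross-entropy term (one simple way to "forget" them --
    an assumed form, not necessarily the paper's).
    """
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Cross-entropy of each target token.
    ce = -np.log(probs[np.arange(len(targets)), targets])
    # Flip the sign for negative tokens: minimizing the total loss then
    # lowers their likelihood instead of raising it.
    sign = np.where(positive_mask, 1.0, -1.0)
    return sign * ce

# Toy example: a 2-token sequence over a 3-word vocabulary, where the
# first token is labeled positive and the second negative.
logits = np.array([[2.0, 0.5, 0.1],
                   [0.2, 1.5, 0.3]])
targets = np.array([0, 1])
positive_mask = np.array([True, False])
loss = token_losses(logits, targets, positive_mask).mean()
```

In an actual SFT loop this per-token loss would replace the uniform cross-entropy before averaging, so gradient descent simultaneously reinforces positive tokens and suppresses negative ones.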