[2508.04329] Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning

arXiv - Machine Learning

Computer Science > Machine Learning
arXiv:2508.04329 (cs)
[Submitted on 6 Aug 2025 (v1), last revised 28 Mar 2026 (this version, v5)]

Title: Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning
Authors: Ali Taheri, Alireza Taban, Qizhou Wang, Shanshan Ye, Abdolreza Mirzaei, Tongliang Liu, Bo Han

Abstract: Supervised fine-tuning (SFT) plays a critical role for pretrained large language models (LLMs), notably enhancing their capacity to acquire domain-specific knowledge while preserving or potentially augmenting their general-purpose capabilities. However, the efficacy of SFT hinges on both data quality and data volume; otherwise, it may yield limited performance gains or even degrade performance relative to the associated baselines. To mitigate this reliance, we suggest categorizing the tokens within each corpus into two parts -- positive and negative tokens -- based on whether they are useful for improving model performance. Positive tokens can be trained in common ways, whereas negative tokens, which may lack essential semantics or be misleading, should be explicitly forgotten. Overall, the token categorization helps the model avoid less informative content, and the forgetting guides the model more precisely on what information to learn. We conduct experiments across diverse and well-established benchmar...
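The learn/forget split described in the abstract can be sketched as a token-level training objective: positive tokens contribute an ordinary negative log-likelihood term, while negative tokens contribute a negated term so that minimizing the total loss pushes their probability down. This is a minimal illustration only; the function name, the boolean masking scheme, and the `forget_weight` coefficient are assumptions for the sketch, not the paper's actual formulation.

```python
import numpy as np

def selective_token_loss(token_log_probs, is_positive, forget_weight=0.1):
    """Combine learning and forgetting at the token level.

    token_log_probs: log p(target token) at each position, shape (T,)
    is_positive:     boolean mask, True where the token should be learned
    forget_weight:   strength of the forgetting term (assumed hyperparameter)

    Positive tokens add standard negative log-likelihood; negative tokens
    add a negated NLL term, so minimizing the total explicitly forgets them.
    """
    nll = -np.asarray(token_log_probs)
    learn = nll[is_positive].sum()            # pull positive tokens up
    forget = -forget_weight * nll[~np.asarray(is_positive)].sum()  # push negative tokens down
    return learn + forget
```

In a real fine-tuning loop this quantity would be computed from per-token cross-entropy (e.g. with reduction disabled) and backpropagated; the sketch only shows how the two token groups enter the objective with opposite signs.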

Originally published on March 31, 2026. Curated by AI News.

