[2503.00035] Constraining Sequential Model Editing with Editing Anchor Compression
Computer Science > Computation and Language
arXiv:2503.00035 (cs)
[Submitted on 25 Feb 2025 (v1), last revised 10 Apr 2026 (this version, v2)]

Title: Constraining Sequential Model Editing with Editing Anchor Compression
Authors: Hao-Xiang Xu, Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang, Jia-Chen Gu

Abstract: Large language models (LLMs) are prone to hallucinations caused by false or outdated knowledge. Because retraining these models is resource-intensive, there is growing interest in model editing techniques. However, the general abilities of LLMs on downstream tasks tend to degrade significantly during sequential editing. This paper statistically observes that the edited parameter matrix deviates increasingly from its previous state as the number of edits grows. This deviation disrupts the original knowledge associations within LLMs and degrades their general abilities. To address this, a framework termed Editing Anchor Compression (EAC) is proposed to constrain the deviation of the parameter matrix during sequential editing. It compresses the editing information by selecting editing anchors that are important in encoding new relations without deviating too much from the original matrix, thereby preservi...
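The deviation the abstract describes can be illustrated with a minimal simulation: starting from a weight matrix, apply a sequence of rank-one updates (a hypothetical stand-in for locate-then-edit style updates) and track the relative Frobenius-norm distance from the original matrix. All names and magnitudes here are illustrative assumptions, not the paper's actual measurement setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 64
W0 = rng.standard_normal((d, d)) / np.sqrt(d)  # pre-editing weight matrix
W = W0.copy()

deviations = []
for step in range(100):
    # Each "edit" adds a small rank-one update, a crude stand-in for
    # a knowledge-editing update to one parameter matrix.
    u = rng.standard_normal(d) * 0.05
    v = rng.standard_normal(d) * 0.05
    W += np.outer(u, v)
    # Relative Frobenius-norm deviation from the original matrix.
    deviations.append(np.linalg.norm(W - W0) / np.linalg.norm(W0))

print(f"after 10 edits:  {deviations[9]:.3f}")
print(f"after 100 edits: {deviations[99]:.3f}")
```

Under independent random updates the accumulated deviation grows with the number of edits, mirroring the trend the paper reports; a constraint like EAC would aim to slow this growth by compressing each edit onto a few important anchors.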