[2512.14395] Massive Editing for Large Language Models Based on Dynamic Weight Generation
Computer Science > Artificial Intelligence
arXiv:2512.14395 (cs)
[Submitted on 16 Dec 2025 (v1), last revised 22 Mar 2026 (this version, v4)]

Title: Massive Editing for Large Language Models Based on Dynamic Weight Generation
Authors: Wentao Wan, Qiqing Lao, Zhiwei Xie, Hefeng Wu, Runnan Lin, Liang Lin, Keze Wang

Abstract: Knowledge Editing (KE) studies how to modify knowledge in Large Language Models (LLMs) at low cost compared to pre-training. Performing large-scale edits on LLMs while preserving the Reliability, Generality, and Locality of the edits remains a challenge. This paper proposes a Massive editing approach for LLMs based on dynamic weight Generation (MeG). MeG attaches a dynamic weight neuron to specific layers of the LLM and uses a diffusion model to conditionally generate the weights of this neuron from the input query associated with the knowledge to be edited. Adding a single dynamic weight neuron thus suffices for large-scale knowledge editing. Experiments show that MeG significantly improves large-scale KE on the Reliability, Generality, and Locality metrics compared to existing knowledge editing methods, particularly with a high percentage-point increase in the absolute value index for...
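The core idea described in the abstract, a single neuron whose weights are generated conditionally on the input query, can be sketched in miniature. This is a minimal illustration only: the paper uses a diffusion model as the conditional generator, which is replaced here by a placeholder linear map `G`; all dimensions, names, and the mixing scalar `alpha` are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8  # hidden size of a toy "LLM layer" -- hypothetical, for illustration

# Frozen weights standing in for one layer of the pre-trained LLM.
W_frozen = rng.normal(size=(D, D))

def generate_neuron_weights(query_emb, G):
    """Placeholder for MeG's conditional generator: maps a query
    embedding to the weight vector of a single dynamic neuron.
    The paper conditions a diffusion model on the query; a plain
    linear map G is used here purely as a stand-in."""
    w = G @ query_emb
    return w / (np.linalg.norm(w) + 1e-8)

def edited_layer(h, query_emb, G, alpha=1.0):
    """Output of the frozen layer plus the contribution of one
    query-conditioned dynamic weight neuron attached to it."""
    base = W_frozen @ h
    w = generate_neuron_weights(query_emb, G)
    # The neuron reads the hidden state (w @ h) and writes its
    # scaled activation back along w.
    return base + alpha * w * (w @ h)

G = rng.normal(size=(D, D))  # generator parameters (placeholder)
h = rng.normal(size=D)       # hidden state at some token position
q = rng.normal(size=D)       # embedding of the edit query

out = edited_layer(h, q, G)
print(out.shape)  # → (8,)
```

The point of the sketch is the shape of the mechanism: the frozen LLM weights are untouched, and per-query behavior changes only through the generated neuron weights, which is what lets one added neuron cover many edits.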