[2506.20746] Dynamic Weight Grafting: Localizing Finetuned Factual Knowledge in Transformers
Computer Science > Machine Learning — arXiv:2506.20746 (cs)

[Submitted on 25 Jun 2025 (v1), last revised 28 Feb 2026 (this version, v3)]

Title: Dynamic Weight Grafting: Localizing Finetuned Factual Knowledge in Transformers
Authors: Todd Nief, David Reber, Sean Richardson, Ari Holtzman

Abstract: When an LLM learns a new fact during finetuning (e.g., new movie releases, a newly elected pope), where does this information go? Are entities enriched with relation information immediately, or do models recall information just-in-time before a prediction? Or are "all of the above" true, with LLMs implementing multiple redundant heuristics? Existing localization approaches (e.g., activation patching) are ill-suited for this analysis because they usually replace parts of the residual stream, thus overriding previous information. To fill this interpretability gap, we propose dynamic weight grafting, an analysis technique that selectively grafts subsets of weights from a finetuned model onto a pretrained model. Using this technique, we show two separate pathways for retrieving finetuned relation information: 1) "enriching" the residual stream with relation information while processing the tokens that correspond to an entity (e.g., "Zendaya" in "Zendaya co-starred with Timothée Chalamet"), and 2) "recalling" this inf...
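The core operation behind the technique, as the abstract describes it, is selecting a subset of a finetuned model's weights and substituting them into a pretrained model (the paper applies this *dynamically*, at particular token positions). The weight-selection step alone can be sketched as follows; this is a minimal illustration with toy state dicts, not the paper's implementation, and the parameter-naming scheme and the helper `graft_weights` are hypothetical:

```python
import re

def graft_weights(pretrained, finetuned, patterns):
    """Build a grafted state dict: start from the pretrained weights, but for
    any parameter whose name matches one of the regex `patterns`, take the
    finetuned model's weight instead. (Hypothetical helper; parameter names
    here follow a made-up GPT-style naming scheme.)"""
    grafted = {}
    for name, weight in pretrained.items():
        if any(re.search(p, name) for p in patterns):
            grafted[name] = finetuned[name]  # graft the finetuned weight
        else:
            grafted[name] = weight           # keep the pretrained weight
    return grafted

# Toy "state dicts": two layers, each with an attention and an MLP weight.
pretrained = {
    "layers.0.attn.w": [0.0], "layers.0.mlp.w": [0.0],
    "layers.1.attn.w": [0.0], "layers.1.mlp.w": [0.0],
}
finetuned = {name: [1.0] for name in pretrained}

# Graft only layer 1's MLP weights from the finetuned model.
grafted = graft_weights(pretrained, finetuned, [r"layers\.1\.mlp"])
print(grafted["layers.1.mlp.w"])   # [1.0]  (from the finetuned model)
print(grafted["layers.0.attn.w"])  # [0.0]  (still pretrained)
```

In the paper's dynamic setting, which weights are active would additionally depend on the token position being processed (e.g., grafting only while the entity tokens are in context), rather than being fixed once as in this static sketch.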