[2603.15033] Rethinking Machine Unlearning: Models Designed to Forget via Key Deletion
Computer Science > Machine Learning
arXiv:2603.15033 (cs)
[Submitted on 16 Mar 2026 (v1), last revised 24 Mar 2026 (this version, v2)]

Title: Rethinking Machine Unlearning: Models Designed to Forget via Key Deletion
Authors: Sonia Laguna, Jorge da Silva Goncalves, Moritz Vandenhirtz, Alain Ryser, Irene Cannistraci, Julia E. Vogt

Abstract: Machine unlearning is rapidly becoming a practical requirement, driven by privacy regulations, data errors, and the need to remove harmful or corrupted training samples. Despite this, most existing methods tackle the problem purely from a post-hoc perspective: they attempt to erase the influence of targeted training samples through parameter updates that typically require access to the full training data. This creates a mismatch with real deployment scenarios where unlearning requests can be anticipated, revealing a fundamental limitation of post-hoc approaches. We propose unlearning by design, a novel paradigm in which models are directly trained to support forgetting as an inherent capability. We instantiate this idea with Machine UNlearning via KEY deletion (MUNKEY), a memory-augmented transformer that decouples instance-specific memorization from model weights. Here, unlearning corre...
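The abstract's core idea, decoupling instance-specific memorization into an external key-value memory so that unlearning reduces to deleting a key, can be illustrated with a toy sketch. This is not the paper's implementation; the class name, storage layout, and attention-style read are all illustrative assumptions about how key deletion could remove an instance's influence on retrieval.

```python
import numpy as np

class KeyValueMemory:
    """Toy external memory: each training instance stores a (key, value) pair.
    Unlearning an instance = deleting its key, so retrieval can no longer
    attend to it. Illustrative only; not the MUNKEY architecture."""

    def __init__(self, dim):
        self.dim = dim
        self.keys = {}    # instance_id -> key vector
        self.values = {}  # instance_id -> value vector

    def write(self, instance_id, key, value):
        self.keys[instance_id] = np.asarray(key, dtype=float)
        self.values[instance_id] = np.asarray(value, dtype=float)

    def read(self, query):
        """Softmax attention over stored keys; returns a weighted sum of values."""
        if not self.keys:
            return np.zeros(self.dim)
        ids = list(self.keys)
        K = np.stack([self.keys[i] for i in ids])
        V = np.stack([self.values[i] for i in ids])
        scores = K @ np.asarray(query, dtype=float)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V

    def unlearn(self, instance_id):
        """Forget an instance by deleting its key (and associated value)."""
        self.keys.pop(instance_id, None)
        self.values.pop(instance_id, None)

mem = KeyValueMemory(2)
mem.write("a", key=[1.0, 0.0], value=[1.0, 0.0])
mem.write("b", key=[0.0, 1.0], value=[0.0, 1.0])
before = mem.read([10.0, 0.0])   # dominated by instance "a"
mem.unlearn("a")                 # key deletion: "a" can no longer be retrieved
after = mem.read([10.0, 0.0])    # only "b" remains
```

The point of the sketch is that no parameter update is needed: once the key is gone, the retrieval step has no path back to the forgotten instance, which is the contrast the abstract draws with post-hoc unlearning methods.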