[2604.04255] Towards Unveiling Vulnerabilities of Large Reasoning Models in Machine Unlearning
Computer Science > Machine Learning
arXiv:2604.04255 (cs)
[Submitted on 5 Apr 2026]

Title: Towards Unveiling Vulnerabilities of Large Reasoning Models in Machine Unlearning
Authors: Aobo Chen, Chenxu Zhao, Chenglin Miao, Mengdi Huai

Abstract: Large language models (LLMs) possess strong semantic understanding, driving significant progress in data mining applications. This is further enhanced by large reasoning models (LRMs), which provide explicit multi-step reasoning traces. Meanwhile, the growing need for the right to be forgotten has driven the development of machine unlearning techniques, which aim to eliminate the influence of specific data from trained models without full retraining. However, unlearning may also introduce new security vulnerabilities by exposing additional interaction surfaces. Although many studies have investigated unlearning attacks, there is no prior work on LRMs. To bridge this gap, we propose in this paper the first LRM unlearning attack, which forces incorrect final answers while generating convincing but misleading reasoning traces. This objective is challenging because of non-differentiable logical constraints, the weak optimization signal over long rationales, and the discrete selection of the forget set. To overcome these challenges, we introduce a bi-level exact unlearning attack that in...
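The abstract is cut off before the method details, so the paper's actual formulation is not available here. Purely as an illustration of what a bi-level exact unlearning attack with discrete forget-set selection can look like in general, one assumed template follows; all symbols (D, C, D_f, theta*, the attack and training losses) are our own notation, not the paper's.

% Hypothetical bi-level template (assumed notation, not the paper's formulation):
% D   = full training set
% C   = pool of candidate unlearning requests the attacker may submit
% D_f = discrete forget set chosen by the attacker
% theta*(D_f) = model exactly unlearned, i.e., retrained without D_f
% L_atk rewards wrong final answers delivered with convincing reasoning traces
\[
\begin{aligned}
\max_{D_f \subseteq \mathcal{C}} \quad & \mathcal{L}_{\mathrm{atk}}\bigl(\theta^{\star}(D_f)\bigr) \\
\text{s.t.} \quad & \theta^{\star}(D_f) = \operatorname*{arg\,min}_{\theta}\; \mathcal{L}_{\mathrm{train}}\bigl(\theta;\, D \setminus D_f\bigr)
\end{aligned}
\]

Under this reading, the outer problem is combinatorial because the forget set is discrete, and the inner exact-unlearning (retraining) step cannot be differentiated through, which matches the optimization challenges the abstract lists.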