[2602.23400] U-CAN: Utility-Aware Contrastive Attenuation for Efficient Unlearning in Generative Recommendation
Computer Science > Machine Learning

arXiv:2602.23400 (cs)

[Submitted on 26 Feb 2026]

Title: U-CAN: Utility-Aware Contrastive Attenuation for Efficient Unlearning in Generative Recommendation

Authors: Zezheng Wu, Rui Wang, Xinghe Cheng, Yang Shao, Qing Yang, Jiapu Wang, Jingwei Zhang

Abstract: Generative Recommendation (GenRec) typically leverages Large Language Models (LLMs) to recast personalization as an instruction-driven sequence generation task. However, fine-tuning on user logs inadvertently encodes sensitive attributes into model parameters, raising serious privacy concerns. Existing Machine Unlearning (MU) techniques struggle with this tension because of the Polysemy Dilemma: individual neurons superimpose sensitive data with general reasoning patterns, so traditional gradient-based or pruning methods incur catastrophic utility loss. To address this, we propose Utility-aware Contrastive AttenuatioN (U-CAN), a precision unlearning framework that operates on low-rank adapters. U-CAN quantifies risk by contrasting activations, focusing on neurons with asymmetric responses: highly sensitive to the forgetting set but suppressed on the retention set. To safeguard performance, we introduce a utility-aware calibration mechanism that combines weight magnitudes with retentio...
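The contrastive risk scoring described in the abstract, ranking neurons by how asymmetrically they respond to the forgetting set versus the retention set, can be illustrated with a minimal sketch. The abstract does not give the paper's exact formula, so the ratio-based score and the function name below are illustrative assumptions, not U-CAN's actual method:

```python
import numpy as np

def asymmetric_response_scores(acts_forget, acts_retain, eps=1e-8):
    """Hypothetical risk score per neuron: mean absolute activation on the
    forgetting set divided by mean absolute activation on the retention set.
    A high score flags a neuron that fires strongly on data to be forgotten
    but stays suppressed on data to be retained."""
    mu_forget = np.abs(acts_forget).mean(axis=0)   # (num_neurons,)
    mu_retain = np.abs(acts_retain).mean(axis=0)   # (num_neurons,)
    return mu_forget / (mu_retain + eps)

# Synthetic example: 16 neurons; neuron 3 is made highly sensitive
# to the forgetting set while behaving normally on the retention set.
rng = np.random.default_rng(0)
acts_f = rng.normal(0.0, 1.0, size=(128, 16))
acts_f[:, 3] += 5.0
acts_r = rng.normal(0.0, 1.0, size=(256, 16))

scores = asymmetric_response_scores(acts_f, acts_r)
print(int(np.argmax(scores)))  # the planted neuron tops the ranking
```

A framework like U-CAN would then attenuate the low-rank adapter weights feeding the highest-scoring neurons, with the utility-aware calibration step (truncated in the abstract above) moderating how strongly each one is suppressed.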