[2602.23400] U-CAN: Utility-Aware Contrastive Attenuation for Efficient Unlearning in Generative Recommendation


arXiv - Machine Learning

About this article


Computer Science > Machine Learning — arXiv:2602.23400 (cs)
[Submitted on 26 Feb 2026]

Title: U-CAN: Utility-Aware Contrastive Attenuation for Efficient Unlearning in Generative Recommendation
Authors: Zezheng Wu, Rui Wang, Xinghe Cheng, Yang Shao, Qing Yang, Jiapu Wang, Jingwei Zhang

Abstract: Generative Recommendation (GenRec) typically leverages Large Language Models (LLMs) to redefine personalization as an instruction-driven sequence generation task. However, fine-tuning on user logs inadvertently encodes sensitive attributes into model parameters, raising critical privacy concerns. Existing Machine Unlearning (MU) techniques struggle to navigate this tension due to the Polysemy Dilemma, where neurons superimpose sensitive data with general reasoning patterns, leading to catastrophic utility loss under traditional gradient or pruning methods. To address this, we propose Utility-aware Contrastive AttenuatioN (U-CAN), a precision unlearning framework that operates on low-rank adapters. U-CAN quantifies risk by contrasting activations and focuses on neurons with asymmetric responses that are highly sensitive to the forgetting set but suppressed on the retention set. To safeguard performance, we introduce a utility-aware calibration mechanism that combines weight magnitudes with retention…
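The abstract only sketches the mechanism, so here is a minimal NumPy illustration of what "contrasting activations" and "utility-aware calibration" might look like in practice. All function names, the forget/retain ratio used as a risk score, and the `floor`/`cap` attenuation parameters are our own assumptions for illustration — they are not taken from the paper:

```python
import numpy as np

def contrast_scores(act_forget, act_retain, eps=1e-8):
    """Per-neuron asymmetry score (hypothetical reading of the abstract):
    mean |activation| on the forgetting set divided by mean |activation|
    on the retention set. High score = fires on forget, quiet on retain."""
    mu_f = np.abs(act_forget).mean(axis=0)  # (num_neurons,)
    mu_r = np.abs(act_retain).mean(axis=0)
    return mu_f / (mu_r + eps)

def utility_calibrated_scale(weights, scores, top_frac=0.05,
                             floor=0.1, cap=0.5):
    """Build a per-neuron attenuation vector for an adapter weight matrix.
    Only the top `top_frac` riskiest neurons are attenuated; per-neuron
    weight magnitude acts as a crude utility proxy, so high-utility risky
    neurons are damped to `cap` while low-utility ones go down to `floor`.
    This calibration rule is an assumption, not the paper's formula."""
    d = scores.shape[0]
    k = max(1, int(top_frac * d))
    risky = np.argsort(scores)[-k:]          # indices of riskiest neurons
    mag = np.abs(weights).mean(axis=1)       # per-neuron weight magnitude
    util = mag / (mag.max() + 1e-8)          # normalize to [0, 1]
    scale = np.ones(d)                       # untouched neurons keep scale 1
    scale[risky] = floor + (cap - floor) * util[risky]
    return scale

# Demo on a toy 2-neuron adapter: neuron 0 fires on the forgetting set,
# neuron 1 on the retention set.
act_f = np.array([[2.0, 0.1], [2.0, 0.1]])
act_r = np.array([[0.1, 2.0], [0.1, 2.0]])
scores = contrast_scores(act_f, act_r)       # neuron 0 scores far higher
W = np.array([[0.5, 0.5], [1.0, 1.0]])       # rows = neurons
scale = utility_calibrated_scale(W, scores, top_frac=0.5)
W_attenuated = scale[:, None] * W            # damp risky rows, keep the rest
```

Attenuating rows of a low-rank adapter rather than the base model keeps the intervention cheap and reversible, which is presumably why the paper operates on adapters; the base weights are never touched in this sketch.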

Originally published on March 02, 2026. Curated by AI News.

