[2604.04982] CURE: Circuit-Aware Unlearning for LLM-based Recommendation
Computer Science > Information Retrieval, arXiv:2604.04982 (cs)
[Submitted on 4 Apr 2026]

Title: CURE: Circuit-Aware Unlearning for LLM-based Recommendation
Authors: Ziheng Chen, Jiali Cheng, Zezhong Fan, Hadi Amiri, Yunzhi Yao, Xiangguo Sun, Yang Zhang

Abstract: Recent advances in large language models (LLMs) have opened new opportunities for recommender systems by enabling rich semantic understanding of, and reasoning about, user interests and item attributes. However, as privacy regulations tighten, incorporating user data into LLM-based recommendation (LLMRec) introduces significant privacy risks, making unlearning algorithms increasingly crucial for practical deployment. Despite growing interest in LLMRec unlearning, most existing approaches formulate unlearning as a weighted combination of forgetting and retaining objectives while updating model parameters uniformly. Such formulations inevitably induce gradient conflicts between the two objectives, leading to unstable optimization and resulting in either ineffective unlearning or severe degradation of model utility. Moreover, the unlearning procedure remains largely black-box, undermining its transparency and trustworthiness. To tackle these challenges, we propose CURE, a circuit-aware unlearning framework that disentangles model components into functionally distin...
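To make the gradient-conflict problem concrete, here is a minimal, hypothetical sketch (not the paper's method) of the baseline formulation the abstract critiques: unlearning as a weighted sum of a forgetting objective and a retaining objective, applied as a single uniform parameter update. When the two objectives' gradients point in opposing directions (negative cosine similarity), no uniform step can improve both at once. The toy gradient values and the `gradient_conflict` helper are illustrative assumptions, not from the paper.

```python
import numpy as np

def gradient_conflict(g_forget, g_retain):
    """Cosine similarity between the two objectives' gradients.

    A negative value means the forgetting and retaining objectives
    pull the shared parameters in opposing directions.
    """
    return float(
        np.dot(g_forget, g_retain)
        / (np.linalg.norm(g_forget) * np.linalg.norm(g_retain))
    )

# Toy gradients for a 4-parameter model (illustrative values only).
g_forget = np.array([1.0, -0.5, 0.2, 0.0])   # gradient of the forgetting loss
g_retain = np.array([-0.8, 0.6, 0.1, 0.0])   # gradient of the retaining loss

# Baseline: weighted combination applied uniformly to all parameters.
alpha, beta = 1.0, 1.0
g_total = alpha * g_forget + beta * g_retain

print(gradient_conflict(g_forget, g_retain))  # negative here: the objectives conflict
```

Under this conflict, tuning `alpha` and `beta` only trades one failure mode for the other (ineffective forgetting vs. utility loss), which is the instability the abstract describes; CURE's circuit-aware disentanglement is positioned as a way around updating all parameters uniformly.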