[2603.03536] SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems
Computer Science > Computation and Language
arXiv:2603.03536 (cs)
[Submitted on 3 Mar 2026]

Title: SafeCRS: Personalized Safety Alignment for LLM-Based Conversational Recommender Systems
Authors: Haochang Hao, Yifan Xu, Xinzhuo Li, Yingqiang Ge, Lu Cheng

Abstract: Current LLM-based conversational recommender systems (CRS) primarily optimize for recommendation accuracy and user satisfaction. We identify an underexplored vulnerability: recommendation outputs can harm users by violating personalized safety constraints when individualized safety sensitivities -- such as trauma triggers, self-harm history, or phobias -- are implicitly inferred from the conversation but not respected during recommendation. We formalize this challenge as personalized CRS safety and introduce SafeRec, a new benchmark dataset designed to systematically evaluate safety risks in LLM-based CRS under user-specific constraints. To address this problem, we propose SafeCRS, a safety-aware training framework that integrates Safe Supervised Fine-Tuning (Safe-SFT) with Safe Group reward-Decoupled Normalization Policy Optimization (Safe-GDPO) to jointly optimize recommendation quality and personalized safety alignment. Extensive experiments on SafeRec demonstrate that SafeCRS reduces the safety violation rate...
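The abstract evaluates systems by their safety violation rate under user-specific constraints. As an illustrative sketch only (not the paper's actual protocol), one could compute such a rate by checking each recommendation against the constraints inferred for that user; the function names and the simple keyword-matching check below are hypothetical simplifications.

```python
# Hypothetical sketch: per-user safety violation rate for CRS outputs.
# Real evaluation would need a far richer violation check than keyword matching.

def violates(recommendation: str, constraints: list[str]) -> bool:
    """Flag a recommendation that mentions any of the user's sensitive topics."""
    text = recommendation.lower()
    return any(c.lower() in text for c in constraints)

def safety_violation_rate(outputs: list[tuple[str, list[str]]]) -> float:
    """Fraction of (recommendation, user_constraints) pairs that violate."""
    if not outputs:
        return 0.0
    violations = sum(violates(rec, cons) for rec, cons in outputs)
    return violations / len(outputs)

outputs = [
    ("Try the new horror film tonight.", ["horror", "spiders"]),  # violation
    ("A calming nature documentary.", ["horror", "spiders"]),     # safe
]
print(safety_violation_rate(outputs))  # 0.5
```

A lower rate indicates better adherence to the personalized constraints the system inferred from the conversation.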