[2603.04191] Towards Realistic Personalization: Evaluating Long-Horizon Preference Following in Personalized User-LLM Interactions
Computer Science > Artificial Intelligence
arXiv:2603.04191 (cs)
[Submitted on 4 Mar 2026]

Title: Towards Realistic Personalization: Evaluating Long-Horizon Preference Following in Personalized User-LLM Interactions
Authors: Qianyun Guo, Yibo Li, Yue Liu, Bryan Hooi

Abstract: Large Language Models (LLMs) increasingly serve as personal assistants, with users sharing complex and diverse preferences over extended interactions. However, how well LLMs follow these preferences in realistic, long-term settings remains underexplored. This work proposes RealPref, a benchmark for evaluating realistic preference following in personalized user-LLM interactions. RealPref features 100 user profiles, 1300 personalized preferences, four types of preference expression (ranging from explicit to implicit), and long-horizon interaction histories. It includes three types of test questions (multiple-choice, true-or-false, and open-ended), with detailed rubrics for LLM-as-a-judge evaluation. Results indicate that LLM performance drops significantly as context length grows and preference expression becomes more implicit, and that generalizing preference understanding to unseen scenarios poses further challenges. RealPref and these findings provide a foundation for...