[2603.20939] User Preference Modeling for Conversational LLM Agents: Weak Rewards from Retrieval-Augmented Interaction
Computer Science > Computation and Language

arXiv:2603.20939 (cs) [Submitted on 21 Mar 2026]

Title: User Preference Modeling for Conversational LLM Agents: Weak Rewards from Retrieval-Augmented Interaction

Authors: Yuren Hao, Shuhaib Mehri, ChengXiang Zhai, Dilek Hakkani-Tür

Abstract: Large language models are increasingly used as personal assistants, yet most lack a persistent user model, forcing users to repeatedly restate preferences across sessions. We propose Vector-Adapted Retrieval Scoring (VARS), a pipeline-agnostic, frozen-backbone framework that represents each user with long-term and short-term vectors in a shared preference space and uses these vectors to bias retrieval scoring over structured preference memory. The vectors are updated online from weak scalar rewards derived from users' feedback, enabling personalization without per-user fine-tuning. We evaluate on MultiSessionCollab, an online multi-session collaboration benchmark with rich user preference profiles, across math and code tasks. Under frozen backbones, the main benefit of user-aware retrieval is improved interaction efficiency rather than large gains in raw task accuracy: our full VARS agent achieves the strongest overall performance, matches a strong Reflection baseline in task success, and reduces t...