[2604.04144] Many Preferences, Few Policies: Towards Scalable Language Model Personalization
Computer Science > Computation and Language
arXiv:2604.04144 (cs)
[Submitted on 5 Apr 2026]

Title: Many Preferences, Few Policies: Towards Scalable Language Model Personalization
Authors: Cheol Woo Kum, Jai Moondra, Roozbeh Nahavandi, Andrew Perrault, Milind Tambe, Swati Gupta

Abstract: The holy grail of LLM personalization is a single LLM for each user, perfectly aligned with that user's preferences. However, maintaining a separate LLM per user is impractical due to constraints on compute, memory, and system complexity. We address this challenge by developing a principled method for selecting a small portfolio of LLMs that captures representative behaviors across heterogeneous users. We model user preferences across multiple traits (e.g., safety, humor, brevity) through a multi-dimensional weight vector. Given reward functions across these dimensions, our algorithm PALM (Portfolio of Aligned LLMs) generates a small portfolio of LLMs such that, for any weight vector, the portfolio contains a near-optimal LLM for the corresponding scalarized objective. To the best of our knowledge, this is the first result that provides theoretical guarantees on both the size and approximation quality of LLM portfolios for personalization. It characterizes the trade-off between system cost and personalization, as well ...
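
To make the scalarization idea concrete, here is a minimal sketch (not the paper's PALM algorithm) of how a deployed portfolio could be used at serving time: each policy is summarized by a hypothetical expected-reward vector over the preference dimensions, and a user with weight vector w is routed to the policy that maximizes the scalarized objective w · r. The reward profiles, dimension names, and `select_policy` helper below are illustrative assumptions.

```python
import numpy as np

# Hypothetical reward profiles: rows = policies in the portfolio,
# columns = preference dimensions (safety, humor, brevity).
portfolio_rewards = np.array([
    [0.9, 0.2, 0.5],   # policy 0: safety-focused
    [0.4, 0.9, 0.3],   # policy 1: humor-focused
    [0.6, 0.4, 0.9],   # policy 2: brevity-focused
])

def select_policy(user_weights: np.ndarray) -> int:
    """Return the index of the portfolio policy with the highest
    scalarized reward w . r for this user's weight vector."""
    scores = portfolio_rewards @ user_weights
    return int(np.argmax(scores))

# Example: a user who cares mostly about safety, a little about brevity.
w = np.array([0.7, 0.1, 0.2])
print(select_policy(w))  # -> 0 under these made-up reward profiles
```

The paper's contribution is choosing the (few) policies in the portfolio so that this argmax is near-optimal for every weight vector; the sketch only shows the lookup step once such a portfolio exists.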