[2603.19294] Maximizing mutual information between user-contexts and responses improve LLM personalization with no additional data
Computer Science > Machine Learning
arXiv:2603.19294 (cs)
[Submitted on 10 Mar 2026]

Title: Maximizing mutual information between user-contexts and responses improve LLM personalization with no additional data
Authors: Hyunji Nam, Haoran Li, Natasha Jaques

Abstract: While post-training has successfully improved large language models (LLMs) across a variety of domains, these gains rely heavily on human-labeled data or external verifiers. Existing data has already been exploited, and new high-quality data is expensive to collect. More fundamentally, true intelligence goes far beyond tasks that are easily verifiable. We therefore need self-improvement frameworks that allow models to improve without external oversight. We propose *Mutual Information Preference Optimization (MIPO)*, a contrastive data augmentation method that constructs preference pairs by generating a positive response conditioned on the correct prompt and a negative response conditioned on a random, unrelated prompt. We show that using Direct Preference Optimization (DPO) to learn from this paired data maximizes the pointwise conditional mutual information (MI), under the base LLM, between prompts and model responses. Empirical results with various-sized Llama- and Qwen-Instruct models show that when...
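The pair-construction step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the `generate` callable is a hypothetical stand-in for sampling a response from the base LLM, and the toy sampler below exists only so the sketch runs end to end.

```python
import random

def build_mipo_pairs(prompts, generate, seed=0):
    """Build DPO-style preference pairs as the abstract describes:
    the chosen response is generated conditioned on the correct prompt,
    the rejected response conditioned on a random, unrelated prompt.
    `generate` is a hypothetical sampler for the base LLM."""
    rng = random.Random(seed)
    pairs = []
    for i, prompt in enumerate(prompts):
        # Pick a different prompt uniformly at random to serve as the
        # "unrelated" context for the negative response.
        j = rng.choice([k for k in range(len(prompts)) if k != i])
        pairs.append({
            "prompt": prompt,
            "chosen": generate(prompt),        # conditioned on the correct prompt
            "rejected": generate(prompts[j]),  # conditioned on an unrelated prompt
        })
    return pairs

# Toy stand-in sampler, just to make the sketch self-contained.
toy_generate = lambda p: f"response to: {p}"
pairs = build_mipo_pairs(
    ["Tell me about jazz", "Plan a weekend hike", "Debug my script"],
    toy_generate,
)
```

Each resulting dict matches the (prompt, chosen, rejected) triple that standard DPO training code consumes; no human labels or external verifiers are required, consistent with the paper's "no additional data" framing.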