[2509.04784] Post-training Large Language Models for Diverse High-Quality Responses
Computer Science > Computation and Language
arXiv:2509.04784 (cs)
[Submitted on 5 Sep 2025 (v1), last revised 2 Mar 2026 (this version, v3)]

Title: Post-training Large Language Models for Diverse High-Quality Responses
Authors: Yilei Chen, Souradip Chakraborty, Lorenz Wolf, Yannis Paschalidis, Aldo Pacchiano

Abstract: Reinforcement learning (RL) has emerged as a popular method for post-training large language models (LLMs). While RL improves a model's performance on downstream tasks, it often reduces the model's output diversity, leading to narrow, canonical responses. Existing methods to enhance diversity are limited, either operating at inference time or focusing on surface-level differences. We propose a novel training method named DQO (Diversity Quality Optimization), based on determinantal point processes (DPPs), to jointly optimize LLMs for quality and semantic diversity. Our approach samples and embeds a group of responses for each prompt, then uses the determinant of a kernel-based similarity matrix to measure diversity as the volume spanned by the embeddings of these responses. DQO is flexible and can be applied on top of existing RL algorithms. Experiments across instruction-following, summarization, story generation, and reasoning tasks demonstrate that our method substantially improves semantic diversity…
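The abstract describes measuring diversity as the determinant of a kernel similarity matrix over response embeddings. A minimal sketch of that determinant computation, assuming a cosine-similarity kernel over L2-normalized embeddings (the function name and kernel choice here are illustrative, not the paper's exact implementation):

```python
import numpy as np

def dpp_diversity(embeddings: np.ndarray) -> float:
    """Volume-based diversity score for a group of response embeddings.

    `embeddings` is an (n, d) array, one row per sampled response.
    Rows are L2-normalized, so the Gram matrix K = E E^T is a
    cosine-similarity kernel. det(K) equals the squared volume of the
    parallelepiped spanned by the embeddings: close to 1 for mutually
    orthogonal (semantically diverse) responses, close to 0 when
    responses are near-duplicates.
    """
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    K = E @ E.T
    return float(np.linalg.det(K))

# Mutually orthogonal embeddings span maximal volume.
print(dpp_diversity(np.eye(3)))  # → 1.0

# Near-duplicate embeddings collapse the volume toward 0.
dup = np.array([[1.0, 0.0], [0.999, 0.001]])
print(dpp_diversity(dup))
```

In a training loop, this scalar (or its log) could serve as the diversity term added to the per-group quality reward, which is the role the determinant plays in the DQO objective described above.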