[2602.15894] Quality-constrained Entropy Maximization Policy Optimization for LLM Diversity
Summary
This paper presents Quality-constrained Entropy Maximization Policy Optimization (QEMPO), a method to enhance diversity in large language model outputs while maintaining quality, addressing a critical challenge in AI alignment.
Why It Matters
As AI systems increasingly rely on large language models, ensuring output diversity without sacrificing quality becomes essential. This research provides a novel approach to balance these competing needs, potentially influencing future AI model development and applications.
Key Takeaways
- QEMPO decomposes the alignment task into quality and diversity distributions.
- The method enhances output diversity while maintaining performance comparable to existing techniques.
- Both online and offline training methods are proposed for policy optimization.
Computer Science > Computation and Language
arXiv:2602.15894 (cs) [Submitted on 11 Feb 2026]
Title: Quality-constrained Entropy Maximization Policy Optimization for LLM Diversity
Authors: Haihui Pan, Yuzhong Hong, Shaoke Lv, Junwei Bao, Hongfei Jiang, Yang Song
Abstract: Recent research indicates that while alignment methods significantly improve the quality of large language model (LLM) outputs, they simultaneously reduce the diversity of those outputs. Although some methods have been proposed to enhance LLM output diversity, they often come at the cost of reduced performance. In this work, we first theoretically demonstrate that the alignment task can be decomposed into two distributions: quality and diversity. To enhance the diversity of LLM outputs while ensuring quality, we propose Quality-constrained Entropy Maximization Policy Optimization (QEMPO). QEMPO aims to maximize the output entropy of the policy while ensuring output quality. By adding different constraints to QEMPO, we obtain different policies. To optimize policies, we propose both online and offline training methods. Experiments validate that QEMPO achieves performance comparable to or even better than RLHF while improving output diversity.
Subjects: Computation and Language (cs.CL); Machine Learning (cs.LG)
Cite as: arXiv:2602.1...
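The core idea in the abstract, maximizing policy entropy subject to an output-quality constraint, can be sketched on a toy categorical policy over a handful of candidate outputs. This is a minimal illustration only: the hinge-style penalty, the quality scores, the threshold `tau`, and the multiplier `lam` are assumptions made for the sketch, not the paper's actual formulation or training method.

```python
import math

def softmax(logits):
    """Convert logits to a probability distribution (our toy policy)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def entropy(p):
    """Shannon entropy of a categorical distribution, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def objective(logits, quality, lam, tau):
    """Entropy plus a hinge penalty when expected quality falls below tau.

    Lagrangian-style relaxation of: max H(p)  s.t.  E_p[quality] >= tau.
    """
    p = softmax(logits)
    expected_q = sum(pi * qi for pi, qi in zip(p, quality))
    return entropy(p) + lam * min(0.0, expected_q - tau)

def optimize(quality, tau, lam=10.0, steps=500, lr=0.5, eps=1e-4):
    """Gradient ascent on the penalized objective via finite differences."""
    logits = [0.0] * len(quality)
    for _ in range(steps):
        base = objective(logits, quality, lam, tau)
        grad = []
        for i in range(len(logits)):
            bumped = logits[:]
            bumped[i] += eps
            grad.append((objective(bumped, quality, lam, tau) - base) / eps)
        logits = [x + lr * g for x, g in zip(logits, grad)]
    return softmax(logits)

# Hypothetical per-candidate quality scores; the constraint forces most
# probability mass onto the two high-quality candidates while the entropy
# term keeps the mass spread between them, rather than collapsing to one.
quality = [0.9, 0.85, 0.3]
p = optimize(quality, tau=0.8)
```

A plain entropy maximizer would return the uniform distribution, whose expected quality (about 0.68 here) violates the constraint; a pure quality maximizer would collapse onto the single best candidate. The constrained optimum sits between the two, which mirrors the quality/diversity decomposition the paper argues for.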