[2411.18195] Scalable Multi-Objective Reinforcement Learning with Fairness Guarantees using Lorenz Dominance
Computer Science > Machine Learning
arXiv:2411.18195 (cs)
[Submitted on 27 Nov 2024 (v1), last revised 26 Mar 2026 (this version, v2)]

Title: Scalable Multi-Objective Reinforcement Learning with Fairness Guarantees using Lorenz Dominance
Authors: Dimitris Michailidis, Willem Röpke, Diederik M. Roijers, Sennay Ghebreab, Fernando P. Santos

Abstract: Multi-Objective Reinforcement Learning (MORL) aims to learn a set of policies that optimize trade-offs between multiple, often conflicting objectives. MORL is computationally more complex than single-objective RL, particularly as the number of objectives increases. Additionally, when objectives involve the preferences of agents or groups, incorporating fairness becomes both important and socially desirable. This paper introduces a principled algorithm that incorporates fairness into MORL while improving scalability to many-objective problems. We propose using Lorenz dominance to identify policies with equitable reward distributions and introduce lambda-Lorenz dominance to enable flexible fairness preferences. We release a new, large-scale real-world transport planning environment and demonstrate that our method encourages the discovery of fair policies, showing improved scalability in two large cities (Xi'an and Amsterdam). Our methods...
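As a concrete illustration of the Lorenz-dominance criterion the abstract refers to, the following is a minimal sketch under the standard (generalized) definition: a reward vector Lorenz-dominates another if, after sorting each vector in ascending order, its cumulative sums are at least as large at every prefix and strictly larger at some prefix. The function names and use of NumPy are our own illustration, not code from the paper, and this does not cover the paper's lambda-Lorenz extension.

```python
import numpy as np

def lorenz_vector(rewards):
    """Sort rewards ascending and return the vector of cumulative sums."""
    return np.cumsum(np.sort(np.asarray(rewards, dtype=float)))

def lorenz_dominates(u, v):
    """True if reward vector u Lorenz-dominates v, i.e. the Lorenz
    vector of u Pareto-dominates the Lorenz vector of v."""
    lu, lv = lorenz_vector(u), lorenz_vector(v)
    return bool(np.all(lu >= lv) and np.any(lu > lv))

# The more equitable split (2, 2) Lorenz-dominates (3, 1):
# Lorenz vectors are [2, 4] vs [1, 4].
print(lorenz_dominates([2, 2], [3, 1]))  # True
print(lorenz_dominates([3, 1], [2, 2]))  # False
```

Filtering a policy set with this relation keeps only policies whose multi-objective returns cannot be made both larger in total and more evenly distributed, which is the intuition behind using Lorenz dominance for fairness in MORL.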