[2507.14529] Kernel Based Maximum Entropy Inverse Reinforcement Learning for Mean-Field Games
Computer Science > Machine Learning

arXiv:2507.14529 (cs)

[Submitted on 19 Jul 2025 (v1), last revised 5 Mar 2026 (this version, v2)]

Title: Kernel Based Maximum Entropy Inverse Reinforcement Learning for Mean-Field Games

Authors: Berkay Anahtarci, Can Deha Kariksiz, Naci Saldi

Abstract: We consider the maximum causal entropy inverse reinforcement learning (IRL) problem for infinite-horizon stationary mean-field games (MFG), in which we model the unknown reward function within a reproducing kernel Hilbert space (RKHS). This allows the inference of rich and potentially nonlinear reward structures directly from expert demonstrations, in contrast to most existing approaches for MFGs that typically restrict the reward to a linear combination of a fixed finite set of basis functions and rely on finite-horizon formulations. We introduce a Lagrangian relaxation that enables us to reformulate the problem as an unconstrained log-likelihood maximization and obtain a solution via a gradient ascent algorithm. To establish the theoretical consistency of the algorithm, we prove the smoothness of the log-likelihood objective through the Fréchet differentiability of the related soft Bellman operators with respect to the parameters in the RKHS. To illustrate the practical advantages of the RKHS formulation, we val...
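The pipeline described in the abstract, an RKHS (representer-form) reward, soft Bellman value iteration, and gradient ascent on the demonstration log-likelihood, can be sketched concretely. The following is a minimal illustrative sketch, not the authors' implementation: it assumes a small finite state-action space, holds the mean field fixed (the paper's stationary-MFG consistency step is omitted), uses an RBF kernel over hypothetical state-action features, and obtains the gradient by automatic differentiation instead of the paper's Fréchet-derivative analysis. All identifiers (n_states, rbf_gram, demo_s, and so on) are placeholders.

```python
# Hedged sketch: kernel-based max-causal-entropy IRL with a fixed mean field.
# Assumptions (not from the paper): toy sizes, random transitions, RBF kernel.
import jax
import jax.numpy as jnp

n_states, n_actions, gamma = 5, 3, 0.9

# Fixed transition kernel P[s, a, s']; in the MFG it would depend on the mean field.
logits = jax.random.normal(jax.random.PRNGKey(0), (n_states, n_actions, n_states))
P = jax.nn.softmax(logits, axis=-1)

# Hypothetical feature embedding of each state-action pair, fed to the RBF kernel.
phi = jax.random.normal(jax.random.PRNGKey(1), (n_states * n_actions, 2))

def rbf_gram(X, Y, bandwidth=1.0):
    """Gram matrix k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))."""
    sq = jnp.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return jnp.exp(-sq / (2.0 * bandwidth ** 2))

K = rbf_gram(phi, phi)  # (SA, SA) Gram matrix over all state-action pairs

def reward(alpha):
    """RKHS reward in representer form: r = sum_j alpha_j k(., (s_j, a_j))."""
    return (K @ alpha).reshape(n_states, n_actions)

def soft_value_iteration(r, n_iters=200):
    """Iterate the soft Bellman operator: Q = r + gamma * P V, V = logsumexp_a Q."""
    def step(Q, _):
        V = jax.nn.logsumexp(Q, axis=1)
        return r + gamma * P @ V, None
    Q, _ = jax.lax.scan(step, jnp.zeros((n_states, n_actions)), None, length=n_iters)
    return Q, jax.nn.logsumexp(Q, axis=1)

def log_likelihood(alpha, demo_s, demo_a):
    """Log-likelihood of expert (state, action) pairs under the soft-optimal policy."""
    Q, V = soft_value_iteration(reward(alpha))
    log_pi = Q - V[:, None]  # pi(a|s) = exp(Q(s,a) - V(s))
    return jnp.sum(log_pi[demo_s, demo_a])

# Placeholder expert demonstrations.
demo_s = jnp.array([0, 1, 2, 3, 4, 1, 2])
demo_a = jnp.array([1, 0, 2, 1, 0, 2, 1])

# Unconstrained log-likelihood maximization via gradient ascent.
grad_ll = jax.jit(jax.grad(log_likelihood))
alpha = jnp.zeros(n_states * n_actions)
for _ in range(300):
    alpha = alpha + 0.05 * grad_ll(alpha, demo_s, demo_a)

print("learned reward:\n", reward(alpha))
```

The smoothness result the abstract mentions is what justifies this kind of gradient step: because the soft Bellman operators are Fréchet differentiable in the RKHS parameters, the log-likelihood objective is smooth and gradient ascent is well defined.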