[2603.29871] ShapE-GRPO: Shapley-Enhanced Reward Allocation for Multi-Candidate LLM Training
Computer Science > Artificial Intelligence
arXiv:2603.29871 (cs)
[Submitted on 31 Mar 2026]

Title: ShapE-GRPO: Shapley-Enhanced Reward Allocation for Multi-Candidate LLM Training
Authors: Rui Ai, Yu Pan, David Simchi-Levi, Chonghuan Wang

Abstract: In user-agent interaction scenarios such as recommendation, brainstorming, and code suggestion, Large Language Models (LLMs) often generate sets of candidate responses, and the objective is to maximize the collective utility of the entire set rather than the utility of each candidate independently. However, existing reinforcement learning post-training paradigms, such as Group Relative Policy Optimization (GRPO), typically assign the same set-level scalar reward to every candidate in the set. This produces noisy training signals: poor candidates free-ride on the high reward earned by a single strong peer, leading to suboptimal exploration. To address this, we propose Shapley-Enhanced GRPO (ShapE-GRPO). Leveraging the permutation invariance of set-level utility, we derive a Shapley-enhanced formulation from cooperative game theory that decomposes set-level rewards into granular, candidate-specific signals. We show that our formulation preserves the fundamental axioms of the Shapley value while remaining computationally efficient with polynomial-time co...
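
The abstract does not spell out the paper's polynomial-time construction, but the credit-assignment idea it describes can be illustrated with the classical Shapley value directly. Below is a minimal Python sketch, assuming a toy set-level utility equal to the best candidate's score, which models the free-riding scenario from the abstract. The candidate names, scores, and the `utility` callable are hypothetical; the exact enumeration here is exponential in the set size and only stands in for, rather than reproduces, the paper's efficient formulation.

    from itertools import combinations
    from math import factorial

    def shapley_values(candidates, utility):
        """Exact Shapley values via enumeration of all coalitions.

        candidates: list of items (e.g., generated responses).
        utility: callable mapping a frozenset of candidates to a
            scalar set-level reward.
        Returns a dict mapping each candidate to its attributed reward.
        """
        n = len(candidates)
        values = {c: 0.0 for c in candidates}
        for i, c in enumerate(candidates):
            rest = candidates[:i] + candidates[i + 1:]
            for k in range(len(rest) + 1):
                # Shapley weight for a coalition of size k that
                # excludes candidate c: k! (n - k - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                for coalition in combinations(rest, k):
                    s = frozenset(coalition)
                    # Marginal contribution of c to this coalition.
                    marginal = utility(s | {c}) - utility(s)
                    values[c] += weight * marginal
        return values

    # Hypothetical scores; the set reward is driven by the best member,
    # the situation in which a uniform set-level reward lets weak
    # candidates free-ride.
    scores = {"a": 0.9, "b": 0.2, "c": 0.1}
    v = lambda s: max((scores[x] for x in s), default=0.0)
    print(shapley_values(list(scores), v))

With this utility, the strong candidate "a" receives most of the credit instead of all three candidates sharing the same scalar reward, and by the efficiency axiom the attributed rewards sum to the set-level utility of the full set.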