[2506.09434] When Is Diversity Rewarded in Cooperative Multi-Agent Learning?
Computer Science > Multiagent Systems
arXiv:2506.09434 (cs)
[Submitted on 11 Jun 2025 (v1), last revised 1 Mar 2026 (this version, v4)]

Title: When Is Diversity Rewarded in Cooperative Multi-Agent Learning?
Authors: Michael Amir, Matteo Bettini, Amanda Prorok

Abstract: The success of teams in robotics, nature, and society often depends on the division of labor among diverse specialists; however, a principled explanation for when such diversity surpasses a homogeneous team is still missing. Focusing on multi-agent task allocation problems, we study this question from the perspective of reward design: what kinds of objectives are best suited for heterogeneous teams? We first consider an instantaneous, non-spatial setting where the global reward is built by two generalized aggregation operators: an inner operator that maps the $N$ agents' effort allocations on individual tasks to a task score, and an outer operator that merges the $M$ task scores into the global team reward. We prove that the curvature of these operators determines whether heterogeneity can increase reward, and that for broad reward families this collapses to a simple convexity test. Next, we ask what incentivizes heterogeneity to emerge when embodied, time-extended agents must learn an effort allocation policy. To study heterogeneity in such settings, w...
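The inner/outer aggregation structure described in the abstract can be sketched numerically. The following is a minimal illustration, not code from the paper: the specific operator choices (a convex inner operator $f(x)=x^2$, a concave one $f(x)=\sqrt{x}$, and summation as the outer operator) are assumptions made here purely to show the convexity intuition, i.e. that a convex inner operator favors a heterogeneous team of specialists while a concave one favors a homogeneous team.

```python
import math

def team_reward(allocations, inner_f, outer=sum):
    """Global reward: the outer operator merges per-task scores, where each
    task score is the (assumed) inner operator sum of inner_f over the
    agents' efforts on that task. `allocations[i]` is agent i's effort
    split across the M tasks."""
    per_task_efforts = zip(*allocations)  # group efforts by task
    return outer(sum(inner_f(e) for e in efforts) for efforts in per_task_efforts)

# Two agents, two tasks. The homogeneous team splits effort evenly;
# the heterogeneous team assigns one full-time specialist per task.
homogeneous   = [(0.5, 0.5), (0.5, 0.5)]
heterogeneous = [(1.0, 0.0), (0.0, 1.0)]

convex  = lambda x: x ** 2   # convex inner operator: specialization pays off
concave = math.sqrt          # concave inner operator: spreading effort pays off

print(team_reward(heterogeneous, convex), team_reward(homogeneous, convex))
print(team_reward(heterogeneous, concave), team_reward(homogeneous, concave))
```

Under the convex inner operator the specialist team scores 2.0 versus 1.0 for the even split, and under the concave one the ordering reverses (2.0 versus roughly 2.83), matching the convexity test the abstract describes.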