[2602.15407] Fairness over Equality: Correcting Social Incentives in Asymmetric Sequential Social Dilemmas
Summary
This paper explores how asymmetric conditions in Sequential Social Dilemmas affect cooperation dynamics in Multi-Agent Reinforcement Learning, proposing modifications to enhance fairness and cooperation.
Why It Matters
The research addresses a critical gap in understanding cooperation in environments where agents have differing incentives. By redefining fairness and introducing new mechanisms, it provides insights that could improve collaborative strategies in AI systems, making them more effective in real-world applications.
Key Takeaways
- Asymmetric conditions significantly influence cooperation dynamics in Sequential Social Dilemmas.
- Existing fairness-based methods often fail under asymmetric conditions, creating unintended incentives to defect.
- The proposed modifications redefine fairness, incorporate agent-based weighting, and localize social feedback.
- Experimental results indicate that the new approach fosters faster cooperative policy emergence without sacrificing scalability.
- This research enhances the understanding of cooperation in multi-agent systems with varying incentives.
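The idea behind redefining fairness via reward ranges can be illustrated with a minimal sketch: instead of demanding raw equality of rewards, each agent's reward is normalized by its own attainable range before comparison. The function names and the min-max normalization below are illustrative assumptions, not the paper's exact formulation.

```python
def normalized_reward(r, r_min, r_max):
    """Map a raw reward onto [0, 1] using the agent's own reward range."""
    if r_max == r_min:
        return 0.0
    return (r - r_min) / (r_max - r_min)

def fairness_gap(rewards, ranges):
    """Largest gap between any two agents' range-normalized rewards.

    rewards: {agent: raw reward}
    ranges:  {agent: (r_min, r_max)}
    """
    norm = [normalized_reward(rewards[a], *ranges[a]) for a in rewards]
    return max(norm) - min(norm)

# Two agents with very different reward scales: raw equality would
# penalize agent "b" for earning more, even though both agents sit at
# the same relative point (50%) of their own reward range.
rewards = {"a": 5.0, "b": 50.0}
ranges = {"a": (0.0, 10.0), "b": (0.0, 100.0)}
print(fairness_gap(rewards, ranges))  # 0.0: fair under asymmetric incentives
```

Under a raw-equality criterion, agent "b" would appear unfairly advantaged and cooperation would be discouraged; the normalized comparison removes that spurious defection incentive.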
Computer Science > Machine Learning
arXiv:2602.15407 (cs)
[Submitted on 17 Feb 2026]
Title: Fairness over Equality: Correcting Social Incentives in Asymmetric Sequential Social Dilemmas
Authors: Alper Demir, Hüseyin Aydın, Kale-ab Abebe Tessera, David Abel, Stefano V. Albrecht
Abstract: Sequential Social Dilemmas (SSDs) provide a key framework for studying how cooperation emerges when individual incentives conflict with collective welfare. In Multi-Agent Reinforcement Learning, these problems are often addressed by incorporating intrinsic drives that encourage prosocial or fair behavior. However, most existing methods assume that agents face identical incentives in the dilemma and require continuous access to global information about other agents to assess fairness. In this work, we introduce asymmetric variants of well-known SSD environments and examine how natural differences between agents influence cooperation dynamics. Our findings reveal that existing fairness-based methods struggle to adapt under asymmetric conditions by enforcing raw equality that wrongfully incentivizes defection. To address this, we propose three modifications: (i) redefining fairness by accounting for agents' reward ranges, (ii) introducing an agent-based weighting mechanism to better handle inherent asymmetries,...