[2602.12316] GT-HarmBench: Benchmarking AI Safety Risks Through the Lens of Game Theory
Summary
GT-HarmBench introduces a benchmark for evaluating AI safety risks in multi-agent environments, highlighting significant reliability gaps in current AI models.
Why It Matters
As AI systems become more prevalent in complex, high-stakes scenarios, understanding their safety risks is crucial. This benchmark provides a standardized method to assess multi-agent interactions, which are often overlooked in existing evaluations, thereby enhancing AI alignment and safety measures.
Key Takeaways
- GT-HarmBench evaluates 2,009 scenarios to assess AI safety in multi-agent contexts.
- Current AI models achieve socially beneficial outcomes only 62% of the time.
- Game-theoretic interventions can improve positive outcomes by up to 18%.
- The benchmark addresses gaps in understanding coordination failures and conflicts.
- This tool aims to standardize testing for AI alignment in complex environments.
Paper Details
Computer Science > Artificial Intelligence, arXiv:2602.12316 (cs)
Submitted on 12 Feb 2026
Title: GT-HarmBench: Benchmarking AI Safety Risks Through the Lens of Game Theory
Authors: Pepijn Cobben, Xuanqiang Angelo Huang, Thao Amelia Pham, Isabel Dahlgren, Terry Jingchen Zhang, Zhijing Jin
Abstract: Frontier AI systems are increasingly capable and deployed in high-stakes multi-agent environments. However, existing AI safety benchmarks largely evaluate single agents, leaving multi-agent risks such as coordination failure and conflict poorly understood. We introduce GT-HarmBench, a benchmark of 2,009 high-stakes scenarios spanning game-theoretic structures such as the Prisoner's Dilemma, Stag Hunt and Chicken. Scenarios are drawn from realistic AI risk contexts in the MIT AI Risk Repository. Across 15 frontier models, agents choose socially beneficial actions in only 62% of cases, frequently leading to harmful outcomes. We measure sensitivity to game-theoretic prompt framing and ordering, and analyze reasoning patterns driving failures. We further show that game-theoretic interventions improve socially beneficial outcomes by up to 18%. Our results highlight substantial reliability gaps and provide a broad standardized testbed for studying alignment in multi-agent environments. The benchmark and code are available.
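To make the game-theoretic structures named in the abstract concrete, here is a minimal sketch (not from the paper's codebase) that classifies a symmetric 2x2 game from the ordinal ranking of the row player's payoffs. The function name and the payoff labels (temptation, reward, punishment, sucker) are standard game-theory conventions, not identifiers from GT-HarmBench.

```python
def classify_symmetric_2x2(t, r, p, s):
    """Classify a symmetric 2x2 game by its ordinal payoff ranking.

    t: temptation  (defect while the other cooperates)
    r: reward      (mutual cooperation)
    p: punishment  (mutual defection)
    s: sucker      (cooperate while the other defects)
    """
    if t > r > p > s:
        # Defection strictly dominates, yet mutual cooperation beats
        # mutual defection: the classic social dilemma.
        return "Prisoner's Dilemma"
    if r > t >= p > s:
        # Two pure equilibria; mutual cooperation is payoff-dominant,
        # but defection is the safer (risk-dominant) choice.
        return "Stag Hunt"
    if t > r > s > p:
        # Mutual defection is the worst outcome, so each player wants
        # the other to yield: an anti-coordination game.
        return "Chicken"
    return "other"


print(classify_symmetric_2x2(5, 3, 1, 0))  # Prisoner's Dilemma
print(classify_symmetric_2x2(3, 4, 2, 1))  # Stag Hunt
print(classify_symmetric_2x2(4, 3, 0, 1))  # Chicken
```

In a benchmark like this, the same payoff structure is dressed in a realistic scenario description, so a model's choice of the "cooperate" action can be scored against the socially beneficial outcome the structure defines.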