[2602.12316] GT-HarmBench: Benchmarking AI Safety Risks Through the Lens of Game Theory

arXiv - AI · 3 min read

Summary

GT-HarmBench introduces a benchmark for evaluating AI safety risks in multi-agent environments, highlighting significant reliability gaps in current AI models.

Why It Matters

As AI systems become more prevalent in complex, high-stakes scenarios, understanding their safety risks is crucial. This benchmark provides a standardized method to assess multi-agent interactions, which are often overlooked in existing evaluations, thereby enhancing AI alignment and safety measures.

Key Takeaways

  • GT-HarmBench evaluates 2,009 scenarios to assess AI safety in multi-agent contexts.
  • Current AI models achieve socially beneficial outcomes only 62% of the time.
  • Game-theoretic interventions can improve positive outcomes by up to 18%.
  • The benchmark addresses gaps in understanding coordination failures and conflicts.
  • This tool aims to standardize testing for AI alignment in complex environments.

Computer Science > Artificial Intelligence
arXiv:2602.12316 (cs) [Submitted on 12 Feb 2026]

Title: GT-HarmBench: Benchmarking AI Safety Risks Through the Lens of Game Theory
Authors: Pepijn Cobben, Xuanqiang Angelo Huang, Thao Amelia Pham, Isabel Dahlgren, Terry Jingchen Zhang, Zhijing Jin

Abstract: Frontier AI systems are increasingly capable and deployed in high-stakes multi-agent environments. However, existing AI safety benchmarks largely evaluate single agents, leaving multi-agent risks such as coordination failure and conflict poorly understood. We introduce GT-HarmBench, a benchmark of 2,009 high-stakes scenarios spanning game-theoretic structures such as the Prisoner's Dilemma, Stag Hunt, and Chicken. Scenarios are drawn from realistic AI risk contexts in the MIT AI Risk Repository. Across 15 frontier models, agents choose socially beneficial actions in only 62% of cases, frequently leading to harmful outcomes. We measure sensitivity to game-theoretic prompt framing and ordering, and analyze reasoning patterns driving failures. We further show that game-theoretic interventions improve socially beneficial outcomes by up to 18%. Our results highlight substantial reliability gaps and provide a broad standardized testbed for studying alignment in multi-agent environments. The benchmark and code are avai...
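The game-theoretic structures named in the abstract (Prisoner's Dilemma, Stag Hunt, Chicken) differ only in how the four canonical payoffs of a symmetric 2x2 game are ordered. As a minimal sketch of that distinction — not code from the paper; the `classify_game` function, its simplified Stag Hunt condition, and the payoff values are all illustrative — the games can be told apart like this:

```python
def classify_game(R, T, P, S):
    """Classify a symmetric 2x2 game by its payoff ordering.

    R: reward (both cooperate), T: temptation (defect against a cooperator),
    P: punishment (both defect), S: sucker's payoff (cooperate against a defector).
    """
    if T > R > P > S:
        return "Prisoner's Dilemma"  # defection dominates; mutual defection is worse for both
    if R > T and P > S:
        return "Stag Hunt"           # simplified check: risky cooperation vs. safe defection
    if T > R > S > P:
        return "Chicken"             # worst outcome is mutual defection
    return "other"

# Canonical textbook payoffs for each structure:
print(classify_game(R=3, T=5, P=1, S=0))  # Prisoner's Dilemma
print(classify_game(R=4, T=3, P=2, S=1))  # Stag Hunt
print(classify_game(R=3, T=4, P=0, S=1))  # Chicken
```

The point of distinguishing these orderings is that "socially beneficial" means something different in each: in a Prisoner's Dilemma cooperation must overcome a dominant strategy, while in a Stag Hunt it only needs to overcome risk.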

Related Articles

  • [2601.15356] Q-Probe: Scaling Image Quality Assessment to High Resolution via Context-Aware Agentic Probing · LLMs · arXiv - AI · 4 min
  • [2510.18196] Contrastive Decoding Mitigates Score Range Bias in LLM-as-a-Judge · LLMs · arXiv - AI · 3 min
  • [2509.23435] AudioRole: An Audio Dataset for Character Role-Playing in Large Language Models · LLMs · arXiv - AI · 4 min
  • [2604.07007] AgentCity: Constitutional Governance for Autonomous Agent Economies via Separation of Power · Robotics · arXiv - AI · 4 min