[2401.04536] Evaluating Language Model Agency through Negotiations
Summary
This paper introduces a method for evaluating language model agency through negotiation games, addressing limitations of existing static benchmarks and testing six widely used models in multi-turn, cross-model interactions.
Why It Matters
Understanding language model agency is crucial as these models are increasingly used in real-world applications. This research provides insights into their capabilities and limitations, particularly in negotiation scenarios, which are common in human interactions. Because the evaluation takes place in a more realistic, interactive context, the findings can inform future work on AI alignment and performance.
Key Takeaways
- Introduces negotiation games as a method to evaluate language model agency.
- Highlights that only the closed-source models tested were able to complete the negotiation tasks.
- Identifies cooperative bargaining as the most challenging scenario for models.
- Demonstrates that even powerful models can be outperformed by weaker opponents.
- Addresses issues of evaluation data leakage in traditional benchmarks.
Computer Science > Computation and Language
arXiv:2401.04536 (cs)
[Submitted on 9 Jan 2024 (v1), last revised 18 Feb 2026 (this version, v3)]
Title: Evaluating Language Model Agency through Negotiations
Authors: Tim R. Davidson, Veniamin Veselovsky, Martin Josifoski, Maxime Peyrard, Antoine Bosselut, Michal Kosinski, Robert West
Abstract: We introduce an approach to evaluate language model (LM) agency using negotiation games. This approach better reflects real-world use cases and addresses some of the shortcomings of alternative LM benchmarks. Negotiation games enable us to study multi-turn and cross-model interactions, modulate complexity, and side-step accidental evaluation data leakage. We use our approach to test six widely used and publicly accessible LMs, evaluating performance and alignment in both self-play and cross-play settings. Noteworthy findings include: (i) only closed-source models tested here were able to complete these tasks; (ii) cooperative bargaining games proved to be most challenging to the models; and (iii) even the most powerful models sometimes "lose" to weaker opponents.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2401.04536 [cs.CL] (or arXiv:2401.04536v3 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2401.04536
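To make the self-play and cross-play setup described in the abstract concrete, here is a minimal sketch of a multi-turn negotiation loop between two LM-backed agents. Everything in it (the `NegotiationAgent` class, `run_negotiation`, the "ACCEPT" stopping convention, and the toy price-bargaining prompts) is a hypothetical illustration under assumed rules, not the authors' actual evaluation harness.

```python
# Minimal sketch of a two-agent negotiation loop, assuming an "LM" is any
# function mapping a prompt string to a text reply. All names and game
# rules here are hypothetical, not taken from the paper's harness.
from typing import Callable, List, Optional

LM = Callable[[str], str]

class NegotiationAgent:
    def __init__(self, name: str, model: LM, system_prompt: str):
        self.name = name
        self.model = model
        self.system_prompt = system_prompt

    def respond(self, history: List[str]) -> str:
        # Each turn, the agent sees the full dialogue so far and replies.
        prompt = self.system_prompt + "\n" + "\n".join(history) + f"\n{self.name}:"
        return self.model(prompt).strip()

def run_negotiation(a: NegotiationAgent, b: NegotiationAgent,
                    max_turns: int = 10) -> Optional[str]:
    """Alternate turns until one side says 'ACCEPT' or the budget runs out.

    Self-play: a and b wrap the same underlying model.
    Cross-play: a and b wrap different models.
    """
    history: List[str] = []
    agents = [a, b]
    for turn in range(max_turns):
        agent = agents[turn % 2]
        message = agent.respond(history)
        history.append(f"{agent.name}: {message}")
        if "ACCEPT" in message.upper():
            return "\n".join(history)  # deal reached
    return None  # no agreement within the turn budget

if __name__ == "__main__":
    # Stub model so the sketch runs without API keys; a real harness
    # would call an actual LM endpoint here instead.
    def stub_model(prompt: str) -> str:
        return "I ACCEPT your offer." if "Buyer:" in prompt else "I offer $40."

    seller = NegotiationAgent("Seller", stub_model,
                              "You are selling a lamp; target price $60.")
    buyer = NegotiationAgent("Buyer", stub_model,
                             "You are buying a lamp; budget $50.")
    outcome = run_negotiation(seller, buyer)
    print(outcome or "No deal reached.")
```

In self-play, both agents would wrap the same underlying model; in cross-play, each wraps a different one, which is how the paper can observe a stronger model sometimes "losing" to a weaker opponent.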