[2507.19364] Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges
Computer Science > Artificial Intelligence

arXiv:2507.19364 (cs)

[Submitted on 25 Jul 2025 (v1), last revised 27 Feb 2026 (this version, v2)]

Title: Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges

Authors: Patrick Taillandier, Jean Daniel Zucker, Arnaud Grignard, Benoit Gaudou, Nghi Quang Huynh, Alexis Drogoul

Abstract: This position paper examines the use of Large Language Models (LLMs) in social simulation, analyzing their potential and limitations from a computational social science perspective. We first review recent findings on LLMs' ability to replicate key aspects of human cognition, including Theory of Mind reasoning and social inference, while identifying persistent limitations such as cognitive biases, lack of grounded understanding, and behavioral inconsistencies. We then survey emerging applications of LLMs in multi-agent simulation frameworks, examining system architectures, scalability, and validation strategies. Projects such as Generative Agents (Smallville) and AgentSociety are analyzed with respect to their empirical grounding and methodological design. Particular attention is given to the challenges of behavioral fidelity, calibration, and reproducibility in large-scale LLM-driven simulations. Finally, we distinguish between contexts where LLM-bas...