[2603.02586] LiveAgentBench: Comprehensive Benchmarking of Agentic Systems Across 104 Real-World Challenges
Computer Science > Artificial Intelligence

arXiv:2603.02586 (cs) [Submitted on 3 Mar 2026]

Title: LiveAgentBench: Comprehensive Benchmarking of Agentic Systems Across 104 Real-World Challenges

Authors: Hao Li, Huan Wang, Jinjie Gu, Wenjie Wang, Chenyi Zhuang, Sikang Bian

Abstract: As large language models grow more capable, general AI agents have become increasingly prevalent in practical applications. However, existing benchmarks face significant limitations, failing to represent real-world user tasks accurately. To address this gap, we present LiveAgentBench, a comprehensive benchmark with 104 scenarios that reflect real user requirements. It is constructed from publicly sourced questions on social media and real-world products. Central to our approach is the Social Perception-Driven Data Generation (SPDG) method, a novel process we developed to ensure each question's real-world relevance, task complexity, and result verifiability. We evaluate various models, frameworks, and commercial products using LiveAgentBench, revealing their practical performance and identifying areas for improvement. This release includes 374 tasks, with 125 for validation and 249 for testing. The SPDG process enables continuous updates with fresh queries from real-world interactions.

Subjects: Artificial Intelligence (cs.AI)