SciVisAgentBench: A Benchmark for Evaluating Scientific Data Analysis and Visualization Agents
Computer Science > Artificial Intelligence

arXiv:2603.29139 (cs) [Submitted on 31 Mar 2026]

Title: SciVisAgentBench: A Benchmark for Evaluating Scientific Data Analysis and Visualization Agents

Authors: Kuangshi Ai, Haichao Miao, Kaiyuan Tang, Nathaniel Gorski, Jianxin Sun, Guoxi Liu, Helgi I. Ingolfsson, David Lenz, Hanqi Guo, Hongfeng Yu, Teja Leburu, Michael Molash, Bei Wang, Tom Peterka, Chaoli Wang, Shusen Liu

Abstract: Recent advances in large language models (LLMs) have enabled agentic systems that translate natural language intent into executable scientific visualization (SciVis) tasks. Despite rapid progress, the community lacks a principled and reproducible benchmark for evaluating these emerging SciVis agents in realistic, multi-step analysis settings. We present SciVisAgentBench, a comprehensive and extensible benchmark for evaluating scientific data analysis and visualization agents. Our benchmark is grounded in a structured taxonomy spanning four dimensions: application domain, data type, complexity level, and visualization operation. It currently comprises 108 expert-crafted cases covering diverse SciVis scenarios. To enable reliable assessment, we introduce a multimodal outcome-centric evaluation pipeline that combines LLM-based judging with deterministic evaluators, incl...
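The abstract names two concrete components: a case taxonomy spanning four dimensions (application domain, data type, complexity level, visualization operation) and a hybrid evaluation pipeline combining LLM-based judging with deterministic evaluators. The sketch below illustrates how such a benchmark might be organized; it is a minimal, hypothetical rendering of those ideas, not the paper's actual schema or code. All names (BenchmarkCase, Complexity, evaluate_case, the demo evaluators and scores) are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, Dict, List


class Complexity(Enum):
    """Hypothetical complexity labels; the paper's actual levels may differ."""
    BASIC = "basic"
    INTERMEDIATE = "intermediate"
    ADVANCED = "advanced"


@dataclass
class BenchmarkCase:
    """One benchmark case tagged along the four taxonomy dimensions
    named in the abstract (domain, data type, complexity, operation)."""
    case_id: str
    prompt: str                      # natural-language task given to the agent
    domain: str                      # e.g. "fluid dynamics" (illustrative value)
    data_type: str                   # e.g. "structured volume" (illustrative value)
    complexity: Complexity
    operations: List[str] = field(default_factory=list)  # e.g. ["isosurface"]


@dataclass
class EvalResult:
    case_id: str
    deterministic_scores: Dict[str, float]
    judge_score: float


def evaluate_case(
    case: BenchmarkCase,
    agent_output: str,
    deterministic_evaluators: Dict[str, Callable[[str], float]],
    llm_judge: Callable[[str, str], float],
) -> EvalResult:
    """Combine deterministic checks with an LLM judge, mirroring the
    outcome-centric hybrid evaluation the abstract describes."""
    det = {name: fn(agent_output) for name, fn in deterministic_evaluators.items()}
    judge = llm_judge(case.prompt, agent_output)  # e.g. a rubric-based 0-1 score
    return EvalResult(case.case_id, det, judge)


if __name__ == "__main__":
    case = BenchmarkCase(
        case_id="demo-001",
        prompt="Render an isosurface of the pressure field at value 0.5.",
        domain="fluid dynamics",
        data_type="structured volume",
        complexity=Complexity.BASIC,
        operations=["isosurface"],
    )
    # Stand-in evaluators; real ones would inspect rendered images or tool state.
    evaluators = {"output_nonempty": lambda out: 1.0 if out else 0.0}
    judge = lambda prompt, out: 0.8  # placeholder for an LLM-based judge
    print(evaluate_case(case, "example agent output", evaluators, judge))
```

The split between deterministic evaluators and an LLM judge reflects the trade-off the abstract hints at: deterministic checks are reproducible but narrow, while LLM judging can assess open-ended multimodal outcomes; combining both is one plausible way to get reliable per-case scores.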