[2510.23642] VisCoder2: Building Multi-Language Visualization Coding Agents
Computer Science > Software Engineering
arXiv:2510.23642 (cs)
[Submitted on 24 Oct 2025 (v1), last revised 7 Apr 2026 (this version, v2)]

Title: VisCoder2: Building Multi-Language Visualization Coding Agents
Authors: Yuansheng Ni, Songcheng Cai, Xiangchao Chen, Jiarong Liang, Zhiheng Lyu, Jiaqi Deng, Kai Zou, Ping Nie, Fei Yuan, Xiang Yue, Wenhu Chen

Abstract: Large language models (LLMs) have recently enabled coding agents capable of generating, executing, and revising visualization code. However, existing models often fail in practical workflows due to limited language coverage, unreliable execution, and the lack of iterative correction mechanisms. Progress has been constrained by narrow datasets and benchmarks that emphasize single-round generation and single-language tasks. To address these challenges, we introduce three complementary resources for advancing visualization coding agents. VisCode-Multi-679K is a large-scale supervised dataset containing 679K validated, executable visualization samples with multi-turn correction dialogues across 12 programming languages. VisPlotBench is a benchmark for systematic evaluation, featuring executable tasks, rendered outputs, and protocols for both initial generation and multi-round self-debug. Finally, we present VisCoder2, a family of multi-language visualization models ...
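The generate-execute-revise loop the abstract describes can be sketched in a few lines. This is not the authors' implementation; it is a minimal illustration of a multi-round self-debug protocol, where `self_debug`, `run_code`, and the `stub_model` placeholder (standing in for an LLM call) are all hypothetical names chosen here for clarity.

```python
import os
import subprocess
import sys
import tempfile


def run_code(code: str):
    """Execute a code snippet in a subprocess; return (success, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=30
        )
        return proc.returncode == 0, proc.stderr
    finally:
        os.unlink(path)


def self_debug(generate, max_rounds=3):
    """Generate code, execute it, and feed the error back for revision
    until it runs cleanly or the round budget is exhausted."""
    code = generate(error=None)  # initial generation round
    for _ in range(max_rounds):
        ok, err = run_code(code)
        if ok:
            return code, True
        code = generate(error=err)  # revision round, conditioned on the traceback
    return code, False


# Stub "model": emits buggy code first, then a fix once shown an error.
def stub_model(error=None):
    return "print(undefined)" if error is None else "print('fixed')"


code, ok = self_debug(stub_model)
print(ok)  # → True
```

A real agent would replace `stub_model` with an LLM call that receives the task, the previous code, and the execution traceback; the benchmark's multi-round protocol then scores whether the final program executes and renders correctly.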