[2505.20139] StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs
Computer Science > Software Engineering
arXiv:2505.20139 (cs)
[Submitted on 26 May 2025 (v1), last revised 2 Apr 2026 (this version, v3)]

Title: StructEval: Benchmarking LLMs' Capabilities to Generate Structural Outputs
Authors: Jialin Yang, Dongfu Jiang, Lipeng He, Sherman Siu, Yuxuan Zhang, Disen Liao, Zhuofeng Li, Huaye Zeng, Yiming Jia, Haozhe Wang, Benjamin Schneider, Chi Ruan, Wentao Ma, Zhiheng Lyu, Yifei Wang, Yi Lu, Quy Duc Do, Ziyan Jiang, Ping Nie, Wenhu Chen

Abstract: As Large Language Models (LLMs) become integral to software development workflows, their ability to generate structured outputs has become critically important. We introduce StructEval, a comprehensive benchmark for evaluating LLMs' capabilities in producing both non-renderable (JSON, YAML, CSV) and renderable (HTML, React, SVG) structured formats. Unlike prior benchmarks, StructEval systematically evaluates structural fidelity across diverse formats through two paradigms: 1) generation tasks, producing structured output from natural language prompts, and 2) conversion tasks, translating between structured formats. Our benchmark encompasses 18 formats and 44 types of tasks, with novel metrics for format adherence and structural correctness. Results reveal significant performance gaps: even state-of-the-art models like o1-min...
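The abstract does not spell out how format adherence is scored. As a rough illustration only (not the paper's actual metric), a minimal adherence check for the non-renderable formats it names (JSON, YAML, CSV) could simply test whether a model's output parses cleanly in the requested format; the function name parses_as and the column-consistency rule for CSV below are assumptions made for this sketch.

# Hypothetical sketch of a format-adherence check for non-renderable formats.
# This is NOT StructEval's scoring code; it only illustrates the idea of
# testing whether a model's output parses in the requested format.
import csv
import io
import json

import yaml  # PyYAML; assumed to be installed


def parses_as(output: str, fmt: str) -> bool:
    """Return True if `output` parses cleanly as the named format."""
    if fmt == "json":
        try:
            json.loads(output)
            return True
        except json.JSONDecodeError:
            return False
    if fmt == "yaml":
        try:
            yaml.safe_load(output)
            return True
        except yaml.YAMLError:
            return False
    if fmt == "csv":
        rows = list(csv.reader(io.StringIO(output)))
        # Require at least one row and a consistent column count across rows.
        return bool(rows) and len({len(r) for r in rows}) == 1
    raise ValueError(f"unsupported format: {fmt}")


if __name__ == "__main__":
    print(parses_as('{"benchmark": "StructEval"}', "json"))  # True
    print(parses_as("formats: 18\ntasks: 44", "yaml"))       # True
    print(parses_as("a,b\n1,2,3", "csv"))                    # False (ragged rows)

Note that almost any string parses as YAML, so parsing alone is a weak signal; a full evaluation would presumably also score structural correctness against the task's expected structure, which is what the abstract's second metric suggests.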