[2604.04192] Graphic-Design-Bench: A Comprehensive Benchmark for Evaluating AI on Graphic Design Tasks
Computer Science > Computer Vision and Pattern Recognition

arXiv:2604.04192 (cs)

[Submitted on 5 Apr 2026]

Title: Graphic-Design-Bench: A Comprehensive Benchmark for Evaluating AI on Graphic Design Tasks

Authors: Adrienne Deganutti, Elad Hirsch, Haonan Zhu, Jaejung Seol, Purvanshi Mehta

Abstract: We introduce GraphicDesignBench (GDB), the first comprehensive benchmark suite designed specifically to evaluate AI models on the full breadth of professional graphic design tasks. Unlike existing benchmarks that focus on natural-image understanding or generic text-to-image synthesis, GDB targets the unique challenges of professional design work: translating communicative intent into structured layouts, rendering typographically faithful text, manipulating layered compositions, producing valid vector graphics, and reasoning about animation. The suite comprises 50 tasks organized along five axes: layout, typography, infographics, template & design semantics, and animation. Each axis is evaluated under both understanding and generation settings and grounded in real-world design templates drawn from the LICA layered-composition dataset. We evaluate a set of frontier closed-source models using a standardized metric taxonomy covering spatial accuracy, perceptual quality, text fidelity, semantic alignment, a...
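The abstract page does not describe how the metric taxonomy is implemented. As a purely illustrative sketch (not the authors' method), a spatial-accuracy metric for layout generation is commonly computed as intersection-over-union (IoU) between predicted and reference element bounding boxes; the index-based pairing below is a simplifying assumption.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def mean_layout_iou(pred_boxes, ref_boxes):
    """Mean IoU over layout elements, matched by index (hypothetical pairing)."""
    return sum(iou(p, r) for p, r in zip(pred_boxes, ref_boxes)) / len(ref_boxes)
```

A benchmark would typically pair predicted and reference elements by role or optimal matching rather than by index; the simple version above only shows the metric's core computation.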