[2603.03072] TikZilla: Scaling Text-to-TikZ with High-Quality Data and Reinforcement Learning
Computer Science > Artificial Intelligence
arXiv:2603.03072 (cs)
[Submitted on 3 Mar 2026]

Title: TikZilla: Scaling Text-to-TikZ with High-Quality Data and Reinforcement Learning
Authors: Christian Greisinger, Steffen Eger

Abstract: Large language models (LLMs) are increasingly used to assist scientists across diverse workflows. A key challenge is generating high-quality figures from textual descriptions, often represented as TikZ programs that can be rendered as scientific images. Prior research has proposed a variety of datasets and modeling approaches for this task. However, existing datasets for Text-to-TikZ are too small and noisy to capture the complexity of TikZ, causing mismatches between text and rendered figures. Moreover, prior approaches rely solely on supervised fine-tuning (SFT), which does not expose the model to the rendered semantics of the figure, often resulting in errors such as looping, irrelevant content, and incorrect spatial relations. To address these issues, we construct DaTikZ-V4, a dataset more than four times larger and substantially higher in quality than DaTikZ-V3, enriched with LLM-generated figure descriptions. Using this dataset, we train TikZilla, a family of small open-source Qwen models (3B and 8B) with a two-stage pipeline of SFT followed by reinforcement learning (RL). For R...