[2603.21465] DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation
Computer Science > Computation and Language

arXiv:2603.21465 (cs) [Submitted on 23 Mar 2026]

Title: DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation

Authors: Siqi Guo, Ming Lin, Tianbao Yang

Abstract: Developing efficient CUDA kernels is a fundamental yet challenging task in the generative AI industry. Recent research leverages Large Language Models (LLMs) to automatically convert PyTorch reference implementations into CUDA kernels, significantly reducing engineering effort. Yet state-of-the-art LLMs, such as GPT-5.2 and Claude-Sonnet-4.5, still struggle with this specific task. To address this challenge, we propose DRTriton, a scalable learning framework for training LLMs to convert PyTorch code into highly optimized Triton kernels, which are then compiled to CUDA kernels at runtime. DRTriton consists of three key components: (i) a synthetic data generation algorithm, CSP-DAG, that guarantees full coverage and unbiased uniform sampling over the operator space with controlled difficulty; (ii) a curriculum reinforcement learning scheme with a decoupled reward that efficiently optimizes conversion success rate and inference speed simultaneously; and (iii) a test-time search algorithm that further improves the inference speed of the generated Triton kernels. Notably, despite being trained...
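The abstract mentions a decoupled reward (correctness and speed scored separately) and a test-time search over generated kernels. The sketch below is an illustrative guess at what such components could look like, not the paper's actual implementation; the function names `decoupled_reward` and `pick_fastest_correct` and the specific reward shaping (gated success bonus plus capped speedup) are assumptions for illustration only.

```python
# Hypothetical sketch of a decoupled reward for kernel-generation RL:
# correctness is a hard gate, so a kernel earns a speedup bonus only
# if it compiles AND matches the reference output.
def decoupled_reward(compiles: bool, correct: bool,
                     ref_time: float, kernel_time: float) -> float:
    """Return a scalar reward: success gate plus a capped speedup bonus."""
    if not (compiles and correct):
        return 0.0  # failed kernels earn nothing, regardless of speed
    speedup = ref_time / kernel_time
    # cap the speed bonus so timing outliers don't dominate the gradient
    return 1.0 + min(speedup, 10.0)


# Hypothetical test-time search: benchmark n sampled candidate kernels
# and keep the fastest one that passes the correctness check.
def pick_fastest_correct(candidates, run_and_check):
    """run_and_check(c) -> (is_correct, elapsed_seconds) for candidate c."""
    best, best_time = None, float("inf")
    for cand in candidates:
        ok, elapsed = run_and_check(cand)
        if ok and elapsed < best_time:
            best, best_time = cand, elapsed
    return best  # None if no candidate was correct


if __name__ == "__main__":
    # a correct kernel 2x faster than the reference: 1.0 gate + 2.0 bonus
    print(decoupled_reward(True, True, ref_time=2.0, kernel_time=1.0))  # 3.0
    # toy search: "a" is incorrect, "c" is the fastest correct candidate
    timings = {"a": 1.0, "b": 2.0, "c": 0.5}
    print(pick_fastest_correct(
        ["a", "b", "c"],
        lambda c: (c != "a", timings[c]),
    ))  # c
```

In this reading, decoupling keeps the two objectives from interfering: speed gradients never reward incorrect kernels, and correctness is optimized even when no speedup is achieved yet.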