[2603.01409] MIST-RL: Mutation-based Incremental Suite Testing via Reinforcement Learning
Computer Science > Artificial Intelligence
arXiv:2603.01409 (cs)
[Submitted on 2 Mar 2026]
Title: MIST-RL: Mutation-based Incremental Suite Testing via Reinforcement Learning
Authors: Sicheng Zhu, Jiajun Wang, Jiawei Ai, Xin Li
Abstract: Large Language Models (LLMs) often fail to generate correct code on the first attempt, making it necessary to use generated unit tests as verifiers to validate candidate solutions. Despite their recent success, verification methods remain constrained by a "scaling-by-quantity" paradigm. This brute-force approach suffers from a critical limitation: it yields diminishing returns in fault detection while causing severe test redundancy. To address this, we propose MIST-RL (Mutation-based Incremental Suite Testing via Reinforcement Learning), a framework that shifts the focus to "scaling-by-utility". We formulate test generation as a sequential decision process optimized via Group Relative Policy Optimization (GRPO). Specifically, we introduce a novel incremental mutation reward combined with dynamic penalties, which incentivizes the model to discover new faults while suppressing functionally equivalent assertions. Experiments on HumanEval+ and MBPP+ demonstrate that MIST-RL outperforms state-of-the-art baselines. It achieves a +28.5% higher mutation score while reducing the number...
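The "incremental mutation reward" idea from the abstract can be sketched as follows. This is a minimal illustrative reconstruction, not the paper's actual formula: the reward shape (count of newly killed mutants) and the redundancy penalty value are assumptions, as is the helper name `incremental_mutation_reward`.

```python
# Hypothetical sketch: each newly generated test is rewarded only for
# mutants it kills that no earlier test in the suite already killed,
# and penalized when it contributes no new kills (i.e., is redundant).
# Reward magnitudes here are illustrative, not from the paper.

def incremental_mutation_reward(kill_sets, redundancy_penalty=-0.5):
    """kill_sets: list of sets; kill_sets[i] holds the ids of mutants
    killed by the i-th generated test, in generation order.
    Returns a per-test reward list."""
    killed_so_far = set()
    rewards = []
    for kills in kill_sets:
        new_kills = kills - killed_so_far
        if new_kills:
            # Reward proportional to newly detected faults.
            rewards.append(len(new_kills))
        else:
            # Dynamic penalty suppressing redundant / equivalent tests.
            rewards.append(redundancy_penalty)
        killed_so_far |= kills
    return rewards

# Example: three tests over mutant ids {0..4}; the second test only
# re-kills an already-covered mutant and is penalized.
print(incremental_mutation_reward([{0, 1}, {1}, {2, 3, 4}]))
# → [2, -0.5, 3]
```

In a GRPO setup, such per-test rewards would score sampled generations within a group; this sketch only captures the incremental-coverage signal the abstract describes.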