[2602.20117] ReSyn: Autonomously Scaling Synthetic Environments for Reasoning Models
Summary
The paper presents ReSyn, a pipeline that autonomously generates diverse synthetic reasoning environments, each equipped with an instance generator and a verifier, for training reasoning language models with reinforcement learning; models trained on ReSyn data show consistent gains across reasoning benchmarks.
Why It Matters
ReSyn addresses a key limitation of existing synthetic data generation methods, which remain largely solution-centric or rely on a handful of hand-crafted procedural environments, by scaling verifier-based supervision across many automatically generated tasks. This matters for AI development because it offers a scalable path to improving model reasoning across diverse task families, which can translate into stronger performance in real-world applications.
Key Takeaways
- ReSyn introduces a scalable pipeline for generating reasoning environments.
- The method enhances reasoning language models' performance on benchmarks.
- Verifier-based supervision and task diversity are crucial for model improvement.
- The approach demonstrates a 27% relative improvement on the challenging BBEH benchmark.
- ReSyn can potentially transform how AI models are trained for reasoning tasks.
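The paper's actual environment interface is not shown in this summary, but the core idea, pairing a programmatic instance generator with a verifier that emits a binary reward, can be illustrated with a toy sketch. All class and method names below are hypothetical, and the sorting task is a stand-in for the paper's constraint-satisfaction, puzzle, and spatial-reasoning environments:

```python
import random
from dataclasses import dataclass


@dataclass
class Instance:
    """One sampled task: a prompt for the model plus hidden ground-truth data."""
    prompt: str
    data: dict


class SortingEnv:
    """Toy reasoning environment: generate a random list, verify a sorted answer."""

    def generate(self, rng: random.Random) -> Instance:
        # Instance generator: sample a fresh problem each call.
        nums = [rng.randint(0, 99) for _ in range(rng.randint(3, 8))]
        prompt = (
            "Sort these numbers in ascending order, "
            f"comma-separated: {nums}"
        )
        return Instance(prompt=prompt, data={"nums": nums})

    def verify(self, instance: Instance, answer: str) -> float:
        # Verifier: reward 1.0 iff the answer is exactly the sorted list.
        # Checking an answer is far easier than producing a worked solution,
        # which is the asymmetry RLVR exploits.
        try:
            parsed = [int(x.strip()) for x in answer.split(",")]
        except ValueError:
            return 0.0
        return 1.0 if parsed == sorted(instance.data["nums"]) else 0.0


rng = random.Random(0)
env = SortingEnv()
inst = env.generate(rng)
correct = ",".join(map(str, sorted(inst.data["nums"])))
reward_good = env.verify(inst, correct)   # 1.0
reward_bad = env.verify(inst, "not a list")  # 0.0
```

In an RLVR training loop, the model would be prompted with `inst.prompt`, and `verify` would score its sampled completions to produce the reward signal; scaling the number and diversity of such environments is what ReSyn automates.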
Computer Science > Artificial Intelligence
arXiv:2602.20117 (cs)
[Submitted on 23 Feb 2026]
Title: ReSyn: Autonomously Scaling Synthetic Environments for Reasoning Models
Authors: Andre He, Nathaniel Weir, Kaj Bostrom, Allen Nie, Darion Cassel, Sam Bayless, Huzefa Rangwala
Abstract: Reinforcement learning with verifiable rewards (RLVR) has emerged as a promising approach for training reasoning language models (RLMs) by leveraging supervision from verifiers. Although verifier implementation is easier than solution annotation for many tasks, existing synthetic data generation methods remain largely solution-centric, while verifier-based methods rely on a few hand-crafted procedural environments. In this work, we scale RLVR by introducing ReSyn, a pipeline that generates diverse reasoning environments equipped with instance generators and verifiers, covering tasks such as constraint satisfaction, algorithmic puzzles, and spatial reasoning. A Qwen2.5-7B-Instruct model trained with RL on ReSyn data achieves consistent gains across reasoning benchmarks and out-of-domain math benchmarks, including a 27% relative improvement on the challenging BBEH benchmark. Ablations show that verifier-based supervision and increased task diversity both contribute significantly, providing empirical evidence that generating reasoning en...