[2511.07833] MURPHY: Multi-Turn GRPO for Self Correcting Code Generation
Summary
The paper presents MURPHY, a multi-turn reinforcement learning framework that enhances code generation by incorporating execution feedback into training, improving pass@1 by up to 8% absolute over compute-matched GRPO baselines.
Why It Matters
MURPHY addresses a limitation of current reinforcement learning techniques for code generation: methods such as GRPO optimize single-turn outputs and struggle with agentic tasks that require iterative decision-making. By training models to refine their own solutions across turns using execution feedback, MURPHY is relevant to researchers and practitioners building self-correcting LLM systems.
Key Takeaways
- MURPHY extends Group Relative Policy Optimization (GRPO) for multi-turn tasks.
- It integrates execution feedback directly into the training process.
- The framework achieves up to an 8% absolute improvement in pass@1 over compute-matched GRPO baselines.
- MURPHY outperforms the prior leading method that incorporates multi-turn execution feedback.
- The approach is significant for enhancing reasoning capabilities in large language models.
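To make the two core ideas above concrete, here is a minimal sketch of GRPO-style group-relative advantages combined with trajectory-level credit assignment. The function names, the standard-deviation normalization, and the choice to broadcast one advantage to every turn are illustrative assumptions, not the paper's actual implementation.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each trajectory's scalar reward
    against the mean and standard deviation of its rollout group.
    (Normalization details are an assumption; variants differ.)"""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against zero std on uniform rewards
    return [(r - mu) / sigma for r in rewards]

def broadcast_to_turns(advantages, turns_per_trajectory):
    """Trajectory-level credit assignment: every turn in a multi-turn
    trajectory shares that trajectory's advantage."""
    return [[a] * n for a, n in zip(advantages, turns_per_trajectory)]

# Example: two of four trajectories pass the tests (reward 1), two fail (0).
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
per_turn = broadcast_to_turns(advs, [2, 3, 1, 2])
```

In this toy group, passing trajectories receive advantage +1 and failing ones -1, and each value is repeated once per turn so that every refinement step in a successful trajectory is reinforced equally.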
Computer Science > Machine Learning
arXiv:2511.07833 (cs)
[Submitted on 11 Nov 2025 (v1), last revised 15 Feb 2026 (this version, v2)]
Title: MURPHY: Multi-Turn GRPO for Self Correcting Code Generation
Authors: Chanakya Ekbote, Vijay Lingam, Sujay Sanghavi, Jun Huan, Behrooz Omidvar-Tehrani, Anoop Deoras, Stefano Soatto
Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful framework for enhancing the reasoning capabilities of large language models (LLMs). However, existing approaches such as Group Relative Policy Optimization (GRPO) and its variants, while effective on reasoning benchmarks, struggle with agentic tasks that require iterative decision-making. We introduce MURPHY, a multi-turn RLVR framework that incorporates execution feedback directly into training, extending GRPO to optimize over multi-turn trajectories where models iteratively refine solutions. MURPHY combines a feedback conditioned rollout tree with trajectory-level credit assignment, and uses pruning to reduce the cost of multi-turn optimization. Evaluations on code generation benchmarks with two model families show that MURPHY consistently improves multi-iteration performance, achieving up to an 8% absolute gain in pass@1 over compute-matched GRPO baselines, and outperforming the prior leading method that incorpor...
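The abstract's "feedback conditioned rollout tree" with pruning can be sketched as a simple search loop: sample several candidate solutions per context, score each by executing it, append the execution feedback to the context for the next turn, and keep only the top candidates as the frontier. Everything here — the `generate`/`execute` interface, the top-k pruning rule, the way feedback is concatenated — is a hedged illustration of the general shape, not MURPHY's actual algorithm.

```python
def rollout_tree(prompt, generate, execute, depth=3, width=4, keep=2):
    """Sketch of a feedback-conditioned rollout tree with pruning.

    generate(context) -> a candidate solution string (e.g. an LLM sample)
    execute(candidate) -> (reward, feedback) from running the code

    Each turn expands every frontier context into `width` candidates,
    scores them, records them, and prunes the frontier to the `keep`
    highest-reward contexts. All parameter names are illustrative.
    """
    frontier = [prompt]
    trajectories = []  # (context, reward) pairs across all turns
    for _turn in range(depth):
        scored = []
        for context in frontier:
            for _ in range(width):
                candidate = generate(context)
                reward, feedback = execute(candidate)
                # Condition the next turn on the execution feedback.
                scored.append((context + candidate + feedback, reward))
        scored.sort(key=lambda pair: pair[1], reverse=True)
        trajectories.extend(scored)
        frontier = [ctx for ctx, _ in scored[:keep]]  # prune to top-k
    return trajectories
```

With `depth=3`, `width=4`, `keep=2`, the loop scores 4 candidates in turn one and 8 in each later turn, so pruning caps the cost at roughly `depth * keep * width` rollouts instead of growing exponentially with depth.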