[2511.07833] MURPHY: Multi-Turn GRPO for Self Correcting Code Generation


Summary

The paper presents MURPHY, a multi-turn reinforcement learning framework that enhances code generation by incorporating execution feedback into training, improving pass@1 by up to 8% absolute over compute-matched baselines.

Why It Matters

MURPHY addresses limitations in current reinforcement learning techniques for code generation by enabling iterative decision-making. This advancement is crucial for developing more capable AI systems that can perform complex tasks, making it relevant for researchers and practitioners in machine learning and AI.

Key Takeaways

  • MURPHY extends Group Relative Policy Optimization (GRPO) for multi-turn tasks.
  • It integrates execution feedback directly into the training process.
  • The framework achieves up to an 8% absolute gain in pass@1 over compute-matched GRPO baselines.
  • MURPHY outperforms previous leading methods in multi-turn execution feedback.
  • The approach is significant for enhancing reasoning capabilities in large language models.

Computer Science > Machine Learning
arXiv:2511.07833 (cs)
[Submitted on 11 Nov 2025 (v1), last revised 15 Feb 2026 (this version, v2)]

Title: MURPHY: Multi-Turn GRPO for Self Correcting Code Generation
Authors: Chanakya Ekbote, Vijay Lingam, Sujay Sanghavi, Jun Huan, Behrooz Omidvar-Tehrani, Anoop Deoras, Stefano Soatto

Abstract: Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a powerful framework for enhancing the reasoning capabilities of large language models (LLMs). However, existing approaches such as Group Relative Policy Optimization (GRPO) and its variants, while effective on reasoning benchmarks, struggle with agentic tasks that require iterative decision-making. We introduce MURPHY, a multi-turn RLVR framework that incorporates execution feedback directly into training, extending GRPO to optimize over multi-turn trajectories where models iteratively refine solutions. MURPHY combines a feedback-conditioned rollout tree with trajectory-level credit assignment, and uses pruning to reduce the cost of multi-turn optimization. Evaluations on code generation benchmarks with two model families show that MURPHY consistently improves multi-iteration performance, achieving up to an 8% absolute gain in pass@1 over compute-matched GRPO baselines, and outperforming the prior leading method that incorpor...

