[2602.16069] The Limits of Long-Context Reasoning in Automated Bug Fixing
Summary
This paper evaluates whether large language models (LLMs) can reliably reason over long contexts for automated bug fixing, finding that performance degrades sharply as accumulated context length grows.
Why It Matters
As LLMs are increasingly used in software engineering, understanding their limitations in reasoning over long contexts is crucial for developing more effective debugging tools. This research highlights the gap between theoretical capabilities and practical performance, guiding future improvements in LLM applications.
Key Takeaways
- LLMs show improved performance in short-context debugging but struggle with long-context reasoning.
- Agentic workflows enhance debugging success rates, but effective reasoning is limited to shorter contexts.
- Performance declines sharply when context length exceeds 20k tokens, indicating a critical threshold for LLMs.
- Qualitative analysis reveals common failure modes in long-context tasks, such as hallucinations and incorrect outputs.
- Current benchmarks may not accurately assess long-context reasoning capabilities in LLMs.
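The 20k-token threshold in the takeaways above suggests a simple way to probe such a claim on one's own agentic runs: bucket trajectories by accumulated context length and compare resolve rates per bucket. The sketch below is illustrative only; the record format and sample data are assumptions, not the paper's actual evaluation harness.

```python
from collections import defaultdict

# Illustrative records: (accumulated context tokens, whether the bug was resolved).
trajectories = [
    (8_000, True), (12_000, True), (15_000, True), (18_000, False),
    (22_000, False), (30_000, True), (45_000, False), (60_000, False),
]

def resolve_rate_by_bucket(records, bucket_size=20_000):
    """Group trajectories into context-length buckets and return the
    fraction of resolved instances in each bucket."""
    buckets = defaultdict(lambda: [0, 0])  # bucket index -> [resolved, total]
    for tokens, resolved in records:
        b = tokens // bucket_size
        buckets[b][0] += int(resolved)
        buckets[b][1] += 1
    return {
        f"{b * bucket_size // 1000}k-{(b + 1) * bucket_size // 1000}k": resolved / total
        for b, (resolved, total) in sorted(buckets.items())
    }

print(resolve_rate_by_bucket(trajectories))
# → {'0k-20k': 0.75, '20k-40k': 0.5, '40k-60k': 0.0, '60k-80k': 0.0}
```

With the sample data, the resolve rate drops monotonically past the first bucket, the kind of pattern the paper reports around 20k tokens.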
arXiv:2602.16069 (cs) [Submitted on 17 Feb 2026]
Title: The Limits of Long-Context Reasoning in Automated Bug Fixing
Authors: Ravi Raju, Mengmeng Ji, Shubhangi Upasani, Bo Li, Urmish Thakker
Abstract: Rapidly increasing context lengths have led to the assumption that large language models (LLMs) can directly reason over entire codebases. Concurrently, recent advances in LLMs have enabled strong performance on software engineering benchmarks, particularly when paired with agentic workflows. In this work, we systematically evaluate whether current LLMs can reliably perform long-context code debugging and patch generation. Using SWE-bench Verified as a controlled experimental setting, we first evaluate state-of-the-art models within an agentic harness (mini-SWE-agent), where performance improves substantially: GPT-5-nano achieves up to a 31% resolve rate on 100 samples, and open-source models such as Deepseek-R1-0528 obtain competitive results. However, token-level analysis shows that successful agentic trajectories typically remain under 20k tokens, and that longer accumulated contexts correlate with lower success rates, indicating that agentic success primarily arises from task decomposition into short-context steps rather than effective long-context reasoning. To directly test long-c...
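The abstract's token-level observation, that longer accumulated contexts correlate with lower success, can be checked crudely on a set of agentic runs by comparing the mean accumulated context of resolved versus unresolved instances. This is a sketch under assumptions: the field names and data are hypothetical, not from the paper.

```python
# Illustrative per-instance agentic runs (hypothetical field names and values).
runs = [
    {"tokens": 9_500, "resolved": True},
    {"tokens": 14_000, "resolved": True},
    {"tokens": 19_000, "resolved": False},
    {"tokens": 26_000, "resolved": True},
    {"tokens": 41_000, "resolved": False},
    {"tokens": 58_000, "resolved": False},
]

def mean_tokens(runs, resolved):
    """Mean accumulated context tokens over runs with the given outcome."""
    vals = [r["tokens"] for r in runs if r["resolved"] is resolved]
    return sum(vals) / len(vals)

# If the correlation the abstract reports holds, successful runs are shorter on average.
print(mean_tokens(runs, True) < mean_tokens(runs, False))  # → True
```

A more rigorous check would use a point-biserial correlation over many runs, but the mean comparison already captures the direction of the effect.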