[2602.19594] ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads?
Summary
ISO-Bench is a benchmark that tests coding agents on real-world inference-optimization tasks, evaluating the patches they produce against expert human solutions across a range of tasks.
Why It Matters
This research addresses a key limitation of existing benchmarks for coding agents: runtime-only metrics can be gamed. By combining execution-based and LLM-based metrics, ISO-Bench provides a more complete assessment of coding agents' capabilities, which matters for advancing AI in software optimization.
Key Takeaways
- ISO-Bench benchmarks coding agents on real-world inference tasks.
- Combines hard and soft metrics for a comprehensive evaluation.
- No single coding agent consistently outperforms others across tasks.
- Agents often identify bottlenecks but struggle to implement solutions.
- Scaffolding is as critical as the underlying model in agent performance.
Computer Science > Machine Learning
arXiv:2602.19594 (cs)
[Submitted on 23 Feb 2026]
Title: ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads?
Authors: Ayush Nangia, Shikhar Mishra, Aman Gokrani, Paras Chopra
Abstract: We introduce ISO-Bench, a benchmark that tests coding agents' capabilities on real-world inference-optimization tasks. The tasks are drawn from vLLM and SGLang, two of the most popular LLM serving frameworks. Each task gives an agent a codebase and a bottleneck description; the agent must produce an optimization patch, which is evaluated against the expert human solution. We curated 54 tasks from merged pull requests with measurable performance improvements. Existing benchmarks rely heavily on runtime-based metrics, but such metrics can be gamed to pass tests without capturing the actual intent of the code changes. We therefore combine hard (execution-based) and soft (LLM-based) metrics and show that both are necessary for complete evaluation. Evaluating both closed- and open-source coding agents, we find that no single agent dominates across codebases. Surprisingly, agents often identify the correct bottleneck but fail to execute a working solution. We also show that agents with identical underlying models differ substantially, suggesting scaffolding is as important...