[2602.19594] ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads?

arXiv - Machine Learning · 3 min read

Summary

ISO-Bench is a benchmark that evaluates coding agents on real-world inference optimization tasks, comparing their patches against expert human solutions.

Why It Matters

This research addresses a limitation of existing benchmarks for coding agents: runtime-only metrics can be gamed. By combining execution-based and LLM-based metrics, ISO-Bench provides a more complete assessment of coding agents' capabilities, which matters for advancing AI-driven software optimization.

Key Takeaways

  • ISO-Bench benchmarks coding agents on real-world inference tasks.
  • Combines hard and soft metrics for a comprehensive evaluation.
  • No single coding agent consistently outperforms others across tasks.
  • Agents often identify bottlenecks but struggle to implement solutions.
  • Scaffolding is as critical as the underlying model in agent performance.
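The hard/soft metric combination described above can be sketched as a simple gate: a patch succeeds only if it both passes the execution-based checks and satisfies an LLM judge. This is an illustrative sketch, not the paper's actual scoring code; the field names, thresholds, and the `evaluate` function are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TaskResult:
    """Outcome of one optimization attempt (hypothetical schema)."""
    tests_passed: bool   # hard metric: patched code still passes the task's tests
    speedup: float       # hard metric: baseline_latency / patched_latency
    judge_score: float   # soft metric: LLM judge's 0-1 rating of patch intent

def evaluate(result: TaskResult,
             min_speedup: float = 1.0,
             min_judge: float = 0.5) -> bool:
    """A patch counts as a success only if it clears both metric families:
    measurably faster without breaking tests (hard), and judged to match
    the intent of the expert patch (soft), which blocks patches that game
    the runtime check alone."""
    hard_ok = result.tests_passed and result.speedup > min_speedup
    soft_ok = result.judge_score >= min_judge
    return hard_ok and soft_ok
```

For example, a patch with a 1.4x speedup and a 0.8 judge score passes, while the same speedup with a 0.2 judge score fails, capturing the paper's point that neither metric family suffices alone.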

Computer Science > Machine Learning
arXiv:2602.19594 (cs)
[Submitted on 23 Feb 2026]

Title: ISO-Bench: Can Coding Agents Optimize Real-World Inference Workloads?
Authors: Ayush Nangia, Shikhar Mishra, Aman Gokrani, Paras Chopra

Abstract: We introduce ISO-Bench, a benchmark that tests coding agents on real-world inference optimization tasks drawn from vLLM and SGLang, two of the most popular LLM serving frameworks. Each task provides an agent with a codebase and a bottleneck description; the agent must produce an optimization patch that is evaluated against the expert human solution. We curated 54 tasks from merged pull requests with measurable performance improvements. While existing benchmarks rely heavily on runtime-based metrics, such approaches can be gamed to pass tests without capturing the actual intent of the code change. We therefore combine hard (execution-based) and soft (LLM-based) metrics and show that both are necessary for complete evaluation. Evaluating both closed and open-source coding agents, we find that no single agent dominates across codebases. Surprisingly, agents often identify the correct bottleneck but fail to implement a working solution. We also show that agents with identical underlying models differ substantially, suggesting scaffolding is as important...
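The abstract's hard, execution-based metric implies timing a workload before and after a patch. A minimal sketch of such a harness, assuming the simplest possible setup (two callables on the same workload, median over several repeats; the function name and API are illustrative, not the benchmark's):

```python
import statistics
import time

def measure_speedup(baseline_fn, patched_fn, repeats: int = 5) -> float:
    """Hypothetical hard-metric harness: run both implementations on the
    same workload and report the median speedup (baseline / patched).
    A value above 1.0 means the patch made the workload faster."""
    def median_seconds(fn) -> float:
        samples = []
        for _ in range(repeats):
            start = time.perf_counter()
            fn()
            samples.append(time.perf_counter() - start)
        return statistics.median(samples)

    return median_seconds(baseline_fn) / median_seconds(patched_fn)
```

Using the median rather than a single run reduces noise from the OS scheduler, which matters when gating on a speedup threshold; a real serving-framework benchmark would additionally pin hardware, warm up caches, and measure end-to-end latency or throughput.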

