[2602.11348] AgentNoiseBench: Benchmarking Robustness of Tool-Using LLM Agents Under Noisy Condition

arXiv - AI · 4 min read

Summary

The paper introduces AgentNoiseBench, a framework for evaluating the robustness of tool-using LLM agents under noisy conditions, highlighting performance variations in real-world scenarios.

Why It Matters

As LLMs are increasingly deployed in real-world applications, understanding their performance under noisy conditions is crucial. This research addresses a significant gap in existing evaluation methods, providing insights that can enhance the reliability of AI systems in unpredictable environments.

Key Takeaways

  • AgentNoiseBench evaluates LLM agents' robustness in noisy environments.
  • The study categorizes environmental noise into two primary types, user-noise and tool-noise, both of which affect agent performance.
  • Results indicate significant performance variations under different noise conditions.
  • The framework allows for automated noise injection into benchmarks, preserving task solvability.
  • Insights from this research can inform the development of more resilient AI systems.

Computer Science > Artificial Intelligence
arXiv:2602.11348 (cs) [Submitted on 11 Feb 2026 (v1), last revised 18 Feb 2026 (this version, v2)]
Title: AgentNoiseBench: Benchmarking Robustness of Tool-Using LLM Agents Under Noisy Condition
Authors: Ruipeng Wang, Yuxin Chen, Yukai Wang, Chang Wu, Junfeng Fang, Xiaodong Cai, Qi Gu, Hui Su, An Zhang, Xiang Wang, Xunliang Cai, Tat-Seng Chua
Abstract: Recent advances in large language models have enabled LLM-based agents to achieve strong performance on a variety of benchmarks. However, their performance in real-world deployments often falls short of that observed in benchmark settings, especially in complex and imperfect environments. This discrepancy largely arises because prevailing training and evaluation paradigms are typically built on idealized assumptions, overlooking the inherent stochasticity and noise present in real-world interactions. To bridge this gap, we introduce AgentNoiseBench, a framework for systematically evaluating the robustness of agentic models under noisy environments. We first conduct an in-depth analysis of biases and uncertainties in real-world scenarios and categorize environmental noise into two primary types: user-noise and tool-noise. Building on this analysis, we develop an automated pipeline that injects controllable noise into ex...
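To make the two noise families concrete, here is a minimal, hypothetical sketch of what injecting user-noise (perturbed user requests) and tool-noise (flaky tool responses) into an agent benchmark could look like. This is not the authors' pipeline; the function names, data structures, and noise rates are assumptions made purely for illustration.

```python
# Illustrative sketch only (hypothetical names, not the AgentNoiseBench pipeline).
import random
from dataclasses import dataclass


@dataclass
class ToolCallResult:
    tool_name: str
    output: str
    ok: bool = True


def inject_user_noise(user_message: str, rate: float = 0.05) -> str:
    """User-noise example: character-level typos in the user's request."""
    chars = list(user_message)
    for i, c in enumerate(chars):
        if c.isalpha() and random.random() < rate:
            chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
    return "".join(chars)


def inject_tool_noise(result: ToolCallResult, error_rate: float = 0.1) -> ToolCallResult:
    """Tool-noise example: occasionally replace the real output with a transient error."""
    if random.random() < error_rate:
        return ToolCallResult(result.tool_name,
                              "ERROR: upstream service timed out, please retry",
                              ok=False)
    return result


# Usage: wrap an existing benchmark episode so the agent sees noisy inputs.
noisy_query = inject_user_noise("Book a table for two at 7pm and email me the confirmation.")
noisy_result = inject_tool_noise(ToolCallResult("restaurant_api", '{"status": "confirmed"}'))
print(noisy_query)
print(noisy_result)
```

The key design point, per the paper's takeaways, is that such perturbations touch only the surface form of inputs and tool responses, so the underlying task remains solvable and the original gold answer can still be used for scoring.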

Related Articles

Llms

I think we’re about to have a new kind of “SEO”… and nobody is talking about it.

More people are asking ChatGPT things like: “what’s the best CRM?” “is this tool worth it?” “alternatives to X” And they just… trust the ...

Reddit - Artificial Intelligence · 1 min ·
Llms

Why would Claude give me the same response over and over and give others different replies?

I asked Claude to "generate me a random word" so I could do some word play. Then I asked it again in a new prompt window on desktop after...

Reddit - Artificial Intelligence · 1 min ·
Llms

Anthropic blocks OpenClaw from Claude subscriptions

Anthropic forces pay-as-you-go pricing for OpenClaw users after creator joins OpenAI

AI Tools & Products · 6 min ·
Llms

wtf bro did what? arc 3 2026

The Physarum Explorer is a high-speed, bio-inspired neural model designed specifically for ARC geometry. Here is the snapshot of its curr...

Reddit - Artificial Intelligence · 1 min ·