[2602.22769] AMA-Bench: Evaluating Long-Horizon Memory for Agentic Applications

arXiv - Machine Learning · 4 min read

Summary

The paper introduces AMA-Bench, a new benchmark for evaluating long-horizon memory in Large Language Models (LLMs) for agentic applications, highlighting the limitations of existing evaluation standards.

Why It Matters

As LLMs are increasingly used in complex, autonomous applications, effective long-horizon memory is crucial for their performance. AMA-Bench addresses the gap in current evaluation methods, providing a more accurate assessment of memory systems in real-world scenarios.

Key Takeaways

  • AMA-Bench evaluates long-horizon memory for LLMs in agentic applications.
  • Current benchmarks focus on dialogue-centric, human–agent interactions, neglecting the machine-generated agent–environment streams found in real deployments.
  • AMA-Agent, a new memory system, outperforms existing baselines by 11.16% on AMA-Bench.
  • The study reveals that existing memory systems often lack causality and objective information.
  • The paper proposes a causality graph and tool-augmented retrieval to enhance memory performance.
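The causality-graph idea in the takeaways above can be illustrated with a minimal sketch. Everything here (the `CausalMemory` class, `add_event`, `causal_chain`) is an illustrative assumption, not the paper's actual design: it simply stores agent events with causal parent links and retrieves the full causal chain behind an event, which is the kind of information the study found existing memory systems often lack.

```python
# Hypothetical sketch of a causality-graph memory store; names and schema
# are illustrative assumptions, not taken from the AMA-Bench paper.
from collections import defaultdict

class CausalMemory:
    """Stores agent events and the causal links between them."""

    def __init__(self):
        self.events = {}                 # event_id -> description
        self.parents = defaultdict(set)  # event_id -> ids of causal parents

    def add_event(self, event_id, description, caused_by=()):
        self.events[event_id] = description
        self.parents[event_id].update(caused_by)

    def causal_chain(self, event_id):
        """Return the event plus all transitive causes, oldest first."""
        seen, stack, chain = set(), [event_id], []
        while stack:
            eid = stack.pop()
            if eid in seen:
                continue
            seen.add(eid)
            chain.append(eid)
            stack.extend(self.parents[eid])
        return [self.events[e] for e in reversed(chain)]

# Toy trajectory: each event records what caused it.
mem = CausalMemory()
mem.add_event("e1", "opened config file")
mem.add_event("e2", "edited timeout value", caused_by=["e1"])
mem.add_event("e3", "service restarted", caused_by=["e2"])
```

A query about "e3" can then retrieve not just the event but its causes ("opened config file" → "edited timeout value" → "service restarted"), which plain similarity-based retrieval would miss.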

Computer Science > Artificial Intelligence · arXiv:2602.22769 (cs) · Submitted on 26 Feb 2026

Title: AMA-Bench: Evaluating Long-Horizon Memory for Agentic Applications

Authors: Yujie Zhao, Boqin Yuan, Junbo Huang, Haocheng Yuan, Zhongming Yu, Haozhou Xu, Lanxiang Hu, Abhilash Shankarampeta, Zimeng Huang, Wentao Ni, Yuandong Tian, Jishen Zhao

Abstract: Large Language Models (LLMs) are deployed as autonomous agents in increasingly complex applications, where enabling long-horizon memory is critical for achieving strong performance. However, a significant gap exists between practical applications and current evaluation standards for agent memory: existing benchmarks primarily focus on dialogue-centric, human-agent interactions. In reality, agent memory consists of a continuous stream of agent-environment interactions that are primarily composed of machine-generated representations. To bridge this gap, we introduce AMA-Bench (Agent Memory with Any length), which evaluates long-horizon memory for LLMs in real agentic applications. It features two key components: (1) a set of real-world agentic trajectories across representative agentic applications, paired with expert-curated QA, and (2) a set of synthetic agentic trajectories that scale to arbitrary horizons, paired with rule-based QA. Our comprehensive study shows that e...
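The abstract's second component, synthetic trajectories that "scale to arbitrary horizons" with rule-based QA, can be sketched as follows. The event schema and helper names (`make_trajectory`, `rule_based_qa`) are assumptions for illustration, not the paper's actual generator: the point is that because the trajectory is generated mechanically, a ground-truth answer can be derived by rule at any horizon length.

```python
# Hedged sketch of arbitrary-horizon synthetic trajectories with rule-based QA,
# in the spirit of the abstract; schema and names are illustrative assumptions.
import random

def make_trajectory(num_steps, num_keys=5, seed=0):
    """Emit a stream of (step, key, value) tool-call events of any length."""
    rng = random.Random(seed)
    return [(step, f"key_{rng.randrange(num_keys)}", rng.randrange(1000))
            for step in range(num_steps)]

def rule_based_qa(trajectory, key):
    """Ground truth derived mechanically: the last value written to `key`."""
    answer = None
    for _, k, v in trajectory:
        if k == key:
            answer = v
    return f"What was the final value of {key}?", answer

traj = make_trajectory(10_000)  # the horizon scales freely
question, gold = rule_based_qa(traj, "key_3")
```

A memory system under test would read the full event stream and answer `question`; its response is then scored against `gold`, with no human annotation needed no matter how long the trajectory grows.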

Related Articles

I Asked ChatGPT 500 Questions. Here Are the Ads I Saw Most Often | WIRED

Ads are rolling out across the US on ChatGPT’s free tier. I asked OpenAI's bot 500 questions to see what these ads were like and how they...

Wired - AI · 9 min · Llms

Abacus.Ai Claw LLM consumes an incredible amount of credit without any usage :(

Three days ago, I clicked the "Deploy OpenClaw In Seconds" button to get an overview of the new service, but I didn't build any automatio...

Reddit - Artificial Intelligence · 1 min
Google’s Gemini AI app debuts in Hong Kong

Tech giant’s chatbot service tops Apple’s app store chart in the city.

AI Tools & Products · 2 min · Llms
Google Launches Gemini Import Tools to Poach Users From Rival AI Apps

Anyone looking to switch their AI assistant will find it surprisingly easy, as it only takes a few steps to move from A to B. This is not...

AI Tools & Products · 4 min · Llms