[2602.13320] Information Fidelity in Tool-Using LLM Agents: A Martingale Analysis of the Model Context Protocol
Summary
This article presents a theoretical framework for analyzing error propagation in tool-using LLM agents, proving linear growth of cumulative distortion and establishing actionable principles for reliable AI deployment.
Why It Matters
As AI agents increasingly rely on external tools for decision-making, understanding how errors accumulate is crucial for ensuring their reliability. This research provides a foundational framework that can enhance the trustworthiness of AI systems, making it relevant for developers and researchers in AI safety and deployment.
Key Takeaways
- Introduces a framework for analyzing error accumulation in LLM agents.
- Proves that cumulative distortion grows linearly, with high-probability deviations bounded by $O(\sqrt{T})$, ruling out exponential failure modes.
- Demonstrates that semantic weighting can reduce distortion by 80%.
- Suggests periodic re-grounding every 9 steps for effective error control.
- Provides actionable principles for deploying trustworthy AI agents.
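The hybrid distortion metric mentioned above combines a discrete fact-matching term with a continuous semantic-similarity term. The sketch below illustrates one plausible form of such a metric; the function names, the weighting parameter `alpha`, and the cosine-based similarity proxy are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a hybrid distortion metric: a weighted combination of
# discrete fact matching and continuous semantic (embedding) distance.
from math import sqrt

def fact_mismatch(facts_ref: set, facts_obs: set) -> float:
    """Discrete term: fraction of reference facts missing from the observation."""
    if not facts_ref:
        return 0.0
    return len(facts_ref - facts_obs) / len(facts_ref)

def cosine_distance(u: list, v: list) -> float:
    """Continuous term: 1 - cosine similarity between embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def hybrid_distortion(facts_ref, facts_obs, emb_ref, emb_obs, alpha=0.5):
    """alpha in [0, 1] weights the discrete term against the semantic term."""
    return (alpha * fact_mismatch(facts_ref, facts_obs)
            + (1 - alpha) * cosine_distance(emb_ref, emb_obs))
```

With identical facts and identical embeddings the metric is 0; with fully disjoint facts and orthogonal embeddings it is 1, regardless of `alpha`.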
Computer Science > Artificial Intelligence
arXiv:2602.13320 (cs) [Submitted on 10 Feb 2026]
Title: Information Fidelity in Tool-Using LLM Agents: A Martingale Analysis of the Model Context Protocol
Authors: Flint Xiaofeng Fan, Cheston Tan, Roger Wattenhofer, Yew-Soon Ong
Abstract: As AI agents powered by large language models (LLMs) increasingly use external tools for high-stakes decisions, a critical reliability question arises: how do errors propagate across sequential tool calls? We introduce the first theoretical framework for analyzing error accumulation in Model Context Protocol (MCP) agents, proving that cumulative distortion exhibits linear growth and high-probability deviations bounded by $O(\sqrt{T})$. This concentration property ensures predictable system behavior and rules out exponential failure modes. We develop a hybrid distortion metric combining discrete fact matching with continuous semantic similarity, then establish martingale concentration bounds on error propagation through sequential tool interactions. Experiments across Qwen2-7B, Llama-3-8B, and Mistral-7B validate our theoretical predictions, showing empirical distortion tracks the linear trend with deviations consistently within $O(\sqrt{T})$ envelopes. Key findings include: semantic weighting reduces ...
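The abstract's central claim, that cumulative distortion follows a linear trend with deviations concentrated inside an $O(\sqrt{T})$ envelope, can be illustrated with a toy simulation. This is not the paper's experiment: it simply models per-step distortion as a constant drift `mu` plus bounded zero-mean noise, so the centered sum is a martingale and the Azuma-Hoeffding inequality bounds its deviation from `mu * T`.

```python
# Toy illustration of linear drift + O(sqrt(T)) concentration.
# mu, c, and the uniform noise model are assumptions for this sketch.
import math
import random

def simulate(T: int, mu: float = 0.1, c: float = 0.05, seed: int = 0) -> list:
    """Cumulative distortion after each step: drift mu plus noise in [-c, c]."""
    rng = random.Random(seed)
    total, path = 0.0, []
    for _ in range(T):
        total += mu + rng.uniform(-c, c)
        path.append(total)
    return path

def azuma_envelope(T: int, c: float, delta: float = 0.01) -> float:
    """Azuma-Hoeffding: |S_T - mu*T| <= c*sqrt(2*T*log(2/delta)) w.p. >= 1-delta."""
    return c * math.sqrt(2 * T * math.log(2 / delta))

T, mu, c = 500, 0.1, 0.05
path = simulate(T, mu, c)
deviation = abs(path[-1] - mu * T)
print(deviation, azuma_envelope(T, c))
```

The deviation from the linear trend stays well inside the envelope, mirroring the concentration behavior the paper proves: growth is predictable, not exponential.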