[2511.09710] Echoing: Identity Failures when LLM Agents Talk to Each Other
Computer Science > Artificial Intelligence
arXiv:2511.09710 (cs)
[Submitted on 12 Nov 2025 (v1), last revised 3 Mar 2026 (this version, v3)]

Title: Echoing: Identity Failures when LLM Agents Talk to Each Other
Authors: Sarath Shekkizhar, Romain Cosentino, Adam Earle, Silvio Savarese

Abstract: As large language model (LLM) based agents interact autonomously with one another, a new class of failures emerges that cannot be predicted from single-agent performance: behavioral drifts in agent-agent conversations (AxA). Unlike human-agent interactions, where humans ground and steer conversations, AxA lacks such stabilizing signals, making these failures unique. We investigate one such failure, echoing, where agents abandon their assigned roles and instead mirror their conversational partners, undermining their intended objectives. Through experiments across $66$ AxA configurations, $4$ domains ($3$ transactional, $1$ advisory), and $2500+$ conversations (over $250000$ LLM inferences), we show that echoing occurs across major LLM providers, with echoing rates as high as $70\%$ depending on the model and domain. Moreover, we find that echoing persists even in advanced reasoning models, at substantial rates ($32.8\%$) that are not reduced by increased reasoning effort. We analyze prompts and conversation dynamics, showing that echoing ...
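As a rough illustration of the AxA setting the abstract describes, the sketch below wires two role-prompted agents into an alternating conversation loop and flags turns where one agent's reply largely restates its partner's last message, a crude stand-in for detecting role abandonment. The `call_llm` stub, the role prompts, and the word-overlap heuristic are assumptions for illustration only, not the evaluation harness or echoing metric used in the paper.

```python
# Minimal sketch of an agent-agent (AxA) conversation loop.
# `call_llm` is a placeholder for any chat-completion backend; the role
# prompts and the overlap heuristic are illustrative assumptions.

def call_llm(system_prompt: str, history: list[dict]) -> str:
    """Stub: send `system_prompt` plus `history` to an LLM and return its reply."""
    raise NotImplementedError("plug in your provider's chat API here")

SELLER_PROMPT = "You are a seller. Your goal: sell the laptop for at least $900."
BUYER_PROMPT = "You are a buyer. Your goal: pay at most $700 for the laptop."

def overlap(a: str, b: str) -> float:
    """Fraction of words in `a` that also appear in `b` (crude mirroring signal)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def run_axa(turns: int = 10, echo_threshold: float = 0.8) -> list[str]:
    transcript: list[str] = []
    history_seller: list[dict] = []  # seller's view of the conversation
    history_buyer: list[dict] = []   # buyer's view of the conversation
    last_msg = "Hi, I'm interested in the laptop."
    history_seller.append({"role": "user", "content": last_msg})

    for turn in range(turns):
        # Alternate speakers: even turns -> seller, odd turns -> buyer.
        if turn % 2 == 0:
            prompt, hist, other = SELLER_PROMPT, history_seller, history_buyer
        else:
            prompt, hist, other = BUYER_PROMPT, history_buyer, history_seller

        reply = call_llm(prompt, hist)
        transcript.append(reply)

        # Flag turns where the reply mostly mirrors the partner's last message.
        if overlap(reply, last_msg) > echo_threshold:
            print(f"turn {turn}: possible echoing (reply mirrors partner)")

        hist.append({"role": "assistant", "content": reply})
        other.append({"role": "user", "content": reply})
        last_msg = reply

    return transcript
```

In such a setup, neither side has a human to ground or steer the exchange, which is the condition under which the paper reports echoing emerging; a real study would replace the overlap heuristic with a proper judgment of whether an agent has abandoned its assigned objective.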