[2601.21064] Textual Equilibrium Propagation for Deep Compound AI Systems
Computer Science > Machine Learning arXiv:2601.21064 (cs) [Submitted on 28 Jan 2026 (v1), last revised 2 Apr 2026 (this version, v3)] Title:Textual Equilibrium Propagation for Deep Compound AI Systems Authors:Minghui Chen, Wenlong Deng, James Zou, Han Yu, Xiaoxiao Li View a PDF of the paper titled Textual Equilibrium Propagation for Deep Compound AI Systems, by Minghui Chen and 4 other authors View PDF HTML (experimental) Abstract:Large language models (LLMs) are increasingly deployed as part of compound AI systems that coordinate multiple modules (e.g., retrievers, tools, verifiers) over long-horizon workflows. Recent approaches that propagate textual feedback globally (e.g., TextGrad) make it feasible to optimize such pipelines, but we find that performance degrades as system depth grows. In particular, long-horizon agentic workflows exhibit two depth-scaling failure modes: 1) exploding textual gradient, where textual feedback grows exponentially with depth, leading to prohibitively long messages and amplifying evaluation biases; and 2) vanishing textual gradient, where limited long-context ability causes models to overemphasize partial feedback, and compression of lengthy feedback causes downstream messages to gradually lose specificity as they propagate many hops upstream. To mitigate these issues, we introduce Textual Equilibrium Propagation (TEP), a local learning principle inspired by Equilibrium Propagation in energy-based models. TEP includes two phases: 1) a free phase...
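The exploding-textual-gradient failure mode described in the abstract can be illustrated with a toy counting model (our own sketch, not code from the paper): if each module in a TextGrad-style pipeline receives the feedback of its downstream consumers and appends its own critique before passing it further upstream, total feedback length grows exponentially with pipeline depth. The branching factor and critique length below are illustrative assumptions.

```python
def propagate_feedback(depth: int, branches: int = 2, critique_len: int = 50) -> int:
    """Toy model of globally propagated textual feedback.

    Each of `branches` downstream consumers copies the feedback it
    received and appends a critique of `critique_len` characters,
    so the message reaching a module `depth` hops upstream grows
    roughly geometrically in `branches`.
    """
    total = critique_len  # feedback emitted by the final evaluator
    for _ in range(depth):
        # every consumer forwards its copy plus its own critique
        total = branches * (total + critique_len)
    return total


if __name__ == "__main__":
    for d in (1, 2, 4, 8):
        print(d, propagate_feedback(d))
```

With two consumers per hop and 50-character critiques, the feedback reaching the first module grows from 200 characters at depth 1 to tens of thousands by depth 8, which is the "prohibitively long messages" regime the abstract refers to.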