[2603.23626] A Theory of LLM Information Susceptibility
Computer Science > Machine Learning
arXiv:2603.23626 (cs) [Submitted on 24 Mar 2026]

Title: A Theory of LLM Information Susceptibility
Authors: Zhuo-Yang Song, Hua Xing Zhu

Abstract: Large language models (LLMs) are increasingly deployed as optimization modules in agentic systems, yet the fundamental limits of such LLM-mediated improvement remain poorly understood. Here we propose a theory of LLM information susceptibility, centred on the hypothesis that when computational resources are sufficiently large, the intervention of a fixed LLM does not increase the performance susceptibility of a strategy set with respect to budget. We develop a multi-variable utility-function framework that generalizes this hypothesis to architectures with multiple co-varying budget channels, and discuss the conditions under which co-scaling can exceed the susceptibility bound. We validate the theory empirically across structurally diverse domains and model scales spanning an order of magnitude, and show that nested, co-scaling architectures open response channels unavailable to fixed configurations. These results clarify when LLM intervention helps and when it does not, demonstrating that tools from statistical physics can provide predictive constraints for the design of AI systems. If the susceptibility hypothesis holds generally, the theory suggests that nested architectures...
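The notion of "performance susceptibility with respect to budget" can be read, by analogy with statistical physics, as the marginal response of a utility function U(B) to the budget B, i.e. χ = dU/dB. The sketch below is purely illustrative and not from the paper: the saturating performance curves, their parameters, and the `susceptibility` helper are all hypothetical toys, chosen only to show how one might compare the budget response of a strategy set with and without a fixed intervention at large budget.

```python
import numpy as np

def susceptibility(U, B, dB=1e-3):
    """Hypothetical estimator: chi = dU/dB via a central finite difference.

    U is any scalar performance function of a scalar budget B.
    """
    return (U(B + dB) - U(B - dB)) / (2 * dB)

# Toy saturating performance curves U(B) = 1 - exp(-k * B).
# These forms are illustrative assumptions, not the paper's models.
base = lambda B: 1.0 - np.exp(-1.0 * B)      # strategy set alone
with_llm = lambda B: 1.0 - np.exp(-1.5 * B)  # same set plus a fixed intervention

# At large budget, the intervened curve is higher in absolute performance
# but its susceptibility (marginal response to extra budget) is lower,
# consistent in spirit with the hypothesis stated in the abstract.
for B in (0.5, 2.0, 5.0):
    print(B, susceptibility(base, B), susceptibility(with_llm, B))
```

In this toy, χ_base(B) = e^(−B) and χ_llm(B) = 1.5 e^(−1.5B), so the intervention raises susceptibility at small B but not once B is large; the crossover budget depends entirely on the assumed rate constants.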