[2604.07236] How Much LLM Does a Self-Revising Agent Actually Need?
Computer Science > Artificial Intelligence

arXiv:2604.07236 (cs)

[Submitted on 8 Apr 2026]

Title: How Much LLM Does a Self-Revising Agent Actually Need?

Authors: Seongwoo Jeong, Seonil Son

Abstract: Recent LLM-based agents often place world modeling, planning, and reflection inside a single language model loop. This can produce capable behavior, but it makes a basic scientific question difficult to answer: which part of the agent's competence actually comes from the LLM, and which part comes from explicit structure around it? We study this question not by claiming a general answer, but by making it empirically tractable. We introduce a declared reflective runtime protocol that externalizes agent state, confidence signals, guarded actions, and hypothetical transitions into inspectable runtime structure. We instantiate this protocol in a declarative runtime and evaluate it on noisy Collaborative Battleship [4] using four progressively structured agents over 54 games (18 boards $\times$ 3 seeds). The resulting decomposition isolates four components: posterior belief tracking, explicit world-model planning, symbolic in-episode reflection, and sparse LLM-based revision. Across this decomposition, explicit world-model planning improves substantially over a greedy posterior-following baseline (+24.1pp win rate, +0.017 F1). Symbolic...
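The abstract's central idea, externalizing agent state, confidence signals, guarded actions, and hypothetical transitions into inspectable runtime structure, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the names (`ReflectiveRuntime`, `GuardedAction`, `BeliefState`), the confidence heuristic, and the threshold are hypothetical and are not drawn from the paper's actual implementation.

```python
# Illustrative sketch only: all class names, fields, and the confidence
# heuristic are assumptions, not the paper's actual protocol or API.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class BeliefState:
    """Posterior over hidden board cells, kept as inspectable data."""
    posterior: dict[tuple[int, int], float]  # cell -> P(ship present)

@dataclass
class GuardedAction:
    """An action paired with an explicit precondition the runtime can check."""
    name: str
    guard: Callable[[BeliefState], bool]
    execute: Callable[[BeliefState], tuple[int, int]]

@dataclass
class HypotheticalTransition:
    """A world-model rollout recorded as data, so planning is auditable."""
    action: str
    predicted_posterior: dict[tuple[int, int], float]
    expected_information_gain: float

class ReflectiveRuntime:
    """Applies symbolic reflection every step; invokes the LLM reviser
    only when confidence drops below a threshold (sparse revision)."""

    def __init__(self,
                 revise_with_llm: Callable[[BeliefState], BeliefState],
                 confidence_threshold: float = 0.3):
        self.revise_with_llm = revise_with_llm
        self.confidence_threshold = confidence_threshold
        self.trace: list[HypotheticalTransition] = []  # inspectable history

    def confidence(self, belief: BeliefState) -> float:
        # Hypothetical confidence signal: probability of the best cell.
        if not belief.posterior:
            return 0.0
        return max(belief.posterior.values())

    def step(self, belief: BeliefState,
             actions: list[GuardedAction]) -> Optional[tuple[int, int]]:
        # Symbolic in-episode reflection: check guards before acting.
        admissible = [a for a in actions if a.guard(belief)]
        if not admissible:
            return None
        # Sparse LLM-based revision: called only on low confidence,
        # so most steps run without touching the language model.
        if self.confidence(belief) < self.confidence_threshold:
            belief = self.revise_with_llm(belief)
        return admissible[0].execute(belief)
```

A design of this shape would make the paper's four-way ablation natural: because belief tracking, planning rollouts, guard checks, and LLM revision each live in separate, declared structures, any one component can be disabled or swapped while the rest of the loop stays fixed.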