[2601.05280] On the Limits of Self-Improving in Large Language Models: The Singularity Is Not Near Without Symbolic Model Synthesis
Summary
This paper formalises recursive self-training in large language models (LLMs) as a discrete-time dynamical system and argues that, without symbolic model synthesis or some other persistent source of external grounding, the anticipated technological singularity is unlikely to be achieved.
Why It Matters
The findings challenge prevailing narratives about the autonomy of AI systems and highlight the need for external grounding in model training. This has implications for the future development of artificial general intelligence (AGI) and the understanding of machine learning dynamics.
Key Takeaways
- LLMs face degenerative dynamics without external grounding.
- Two failure modes identified: Entropy Decay and Variance Amplification.
- Mainstream AGI narratives may overlook the necessity of external signals.
- Neurosymbolic integration is proposed as a solution to these limitations.
- Autonomous systems risk distributional collapse under current closed-loop density-matching objectives.
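The first failure mode, Entropy Decay, can be illustrated with a toy closed loop: repeatedly draw a finite sample from a categorical distribution, then replace the distribution with the empirical frequencies of that sample (a minimal stand-in for "training on your own outputs"). This is a hedged sketch, not the paper's formal model; the symbol count, sample size, and iteration count below are illustrative choices.

```python
import math
import random

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(q * math.log(q) for q in p if q > 0)

def resample(p, n, rng):
    """Draw n samples from p and return the empirical distribution."""
    counts = [0] * len(p)
    for _ in range(n):
        r = rng.random()
        acc = 0.0
        for i, q in enumerate(p):
            acc += q
            if r < acc:
                counts[i] += 1
                break
        else:
            # Guard against floating-point rounding in the cumulative sum.
            counts[-1] += 1
    return [c / n for c in counts]

rng = random.Random(0)
p = [0.25] * 4            # start uniform over 4 symbols (maximum entropy)
history = [entropy(p)]
for _ in range(200):      # closed loop: each generation trains only on its own samples
    p = resample(p, n=50, rng=rng)
    history.append(entropy(p))

print(f"initial entropy: {history[0]:.3f} nats")
print(f"final entropy:   {history[-1]:.3f} nats")
```

Point masses are absorbing states of this loop: once the empirical sample misses a symbol, that symbol's mass is gone for good, so diversity can only be lost, never recovered, without an exogenous signal.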
Computer Science > Information Theory, arXiv:2601.05280 (cs)
[Submitted on 5 Jan 2026 (v1), last revised 21 Feb 2026 (this version, v2)]
Authors: Hector Zenil
Abstract: We formalise recursive self-training in Large Language Models (LLMs) and Generative AI as a discrete-time dynamical system. We prove that if the proportion of exogenous, externally grounded signal $\alpha_t$ vanishes asymptotically ($\alpha_t \to 0$), the system undergoes degenerative dynamics. We derive two fundamental failure modes: (1) \textit{Entropy Decay}, where finite sampling effects induce a monotonic loss of distributional diversity, and (2) \textit{Variance Amplification}, where the absence of persistent grounding causes distributional drift via a random-walk mechanism. These behaviours are architectural invariants of distributional learning on finite samples. We show that the collapse results apply specifically to closed-loop density matching without persistent external signal; systems with non-vanishing exogenous grounding fall outside this regime. However, mainstream Singularity, AGI, and ASI narratives typically posit systems that become increasingly autonomous and require little to...
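The second failure mode, Variance Amplification, and the role of the grounding fraction $\alpha_t$ can be sketched with a one-dimensional drift model: a summary statistic $m_t$ picks up i.i.d. noise each generation, and a fraction $\alpha$ of the update is pulled back toward an external reference $\mu^*$. With $\alpha = 0$ the statistic performs a random walk whose variance grows linearly in $t$; with $\alpha > 0$ the variance stays bounded. This is an assumed toy dynamics, not the paper's derivation; the parameter values are illustrative.

```python
import random
import statistics

def drift(alpha, steps, trials, sigma=1.0, mu_star=0.0, seed=1):
    """Terminal values of m_t under m_{t+1} = (1 - alpha) * (m_t + noise) + alpha * mu_star."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        m = mu_star
        for _ in range(steps):
            # Closed-loop noise accumulation, partially anchored to mu_star.
            m = (1 - alpha) * (m + rng.gauss(0, sigma)) + alpha * mu_star
        finals.append(m)
    return finals

closed = drift(alpha=0.0, steps=100, trials=500)    # no external grounding
grounded = drift(alpha=0.1, steps=100, trials=500)  # persistent grounding

print(f"terminal variance, alpha=0.0: {statistics.variance(closed):.1f}")
print(f"terminal variance, alpha=0.1: {statistics.variance(grounded):.1f}")
```

In the ungrounded case the terminal variance is roughly `steps * sigma**2`, whereas with persistent grounding it converges to the stationary value $(1-\alpha)^2\sigma^2 / (1 - (1-\alpha)^2)$, which is why a non-vanishing $\alpha_t$ places a system outside the collapse regime described in the abstract.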