Computer Science > Computation and Language
arXiv:2604.04743 (cs)
[Submitted on 6 Apr 2026]

Title: Hallucination Basins: A Dynamic Framework for Understanding and Controlling LLM Hallucinations
Authors: Kalyan Cherukuri, Lav R. Varshney

Abstract: Large language models (LLMs) hallucinate: they produce fluent outputs that are factually incorrect. We present a geometric dynamical systems framework in which hallucinations arise from task-dependent basin structure in latent space. Using autoregressive hidden-state trajectories across multiple open-source models and benchmarks, we find that separability is strongly task-dependent rather than universal: factoid settings can show clearer basin separation, whereas summarization and misconception-heavy settings are typically less stable and often overlap. We formalize this behavior with task-complexity and multi-basin theorems, characterize basin emergence in L-layer transformers, and show that geometry-aware steering can reduce hallucination probability without retraining.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Systems and Control (eess.SY)
Cite as: arXiv:2604.04743 [cs.CL] (or arXiv:2604.04743v1 [cs.CL] for this version)
DOI: https://doi.org/10.48550/arXiv.2604.04743
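The abstract does not specify the steering construction, so the sketch below is only a minimal illustration of the general idea of inference-time, geometry-aware steering on hidden-state trajectories: it computes a simple difference-of-means direction between hidden states of factual and incorrect continuations (a standard activation-steering baseline, not necessarily the paper's method) and adds it at one layer during generation, with no retraining. The model choice (gpt2), LAYER, ALPHA, and the tiny contrast sets are all hypothetical.

```python
# Hypothetical activation-steering sketch in the spirit of the abstract:
# nudge hidden states from the "hallucination basin" toward the "factual
# basin" along a difference-of-means direction, without retraining.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"   # stand-in open-source model (assumption)
LAYER = 6        # layer at which to read/steer hidden states (assumption)
ALPHA = 4.0      # steering strength (assumption)

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

def mean_hidden(texts: list[str]) -> torch.Tensor:
    """Mean last-token hidden state at LAYER over a set of texts."""
    states = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids, output_hidden_states=True)
        states.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(states).mean(dim=0)

# Tiny illustrative contrast sets; a real study would use benchmark data.
factual = ["The capital of France is Paris.",
           "Water boils at 100 degrees Celsius."]
hallucinated = ["The capital of France is Lyon.",
                "Water boils at 50 degrees Celsius."]

# Unit direction pointing from the hallucinated cluster toward the factual one.
direction = mean_hidden(factual) - mean_hidden(hallucinated)
direction = direction / direction.norm()

def steer_hook(module, inputs, output):
    """Shift every token's hidden state along the factual direction."""
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * direction.to(hidden.dtype)
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

# Steer at LAYER during generation, then remove the hook.
handle = model.transformer.h[LAYER].register_forward_hook(steer_hook)
ids = tok("The capital of France is", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=8)[0]))
handle.remove()
```

The same hidden-state extraction in mean_hidden is what one would use to record the autoregressive trajectories the abstract analyzes; the steering step then moves those trajectories relative to the estimated basin geometry at inference time.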