[2604.03557] When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression
Computer Science > Artificial Intelligence
arXiv:2604.03557 (cs)
[Submitted on 4 Apr 2026]

Title: When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression
Authors: Xinnan Dai, Kai Yang, Cheng Luo, Shenglai Zeng, Kai Guo, Jiliang Tang

Abstract: Reasoning hallucinations in large language models (LLMs) often appear as fluent yet unsupported conclusions that violate either the given context or underlying factual knowledge. Although such failures are widely observed, the mechanisms by which decoder-only Transformers produce them remain poorly understood. We model next-token prediction as a graph search process over an underlying graph, where entities correspond to nodes and learned transitions form edges. From this perspective, contextual reasoning is a constrained search over a sampled subgraph (intrinsic reasoning), while context-free queries rely on memorized structures in the underlying graph (extrinsic reasoning). We show that reasoning hallucinations arise from two fundamental mechanisms: Path Reuse, where memorized knowledge overrides contextual constraints during early training, and Path Compression, where frequently traversed multi-step paths collapse into shortcut edges in later training. Together, these mechanisms ...
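The graph-search framing in the abstract can be illustrated with a toy sketch. This is not the paper's implementation: the class name, the fixed compression threshold, and the DFS search are illustrative assumptions. It shows how a frequently traversed multi-step path can collapse into a memorized shortcut edge (Path Compression), which then lets a context-constrained search "succeed" even when the required intermediate evidence is excluded from the context subgraph, mimicking a hallucinated conclusion.

```python
# Toy illustration (hypothetical, not the authors' code): reasoning as
# search over a directed graph of entities (nodes) and transitions (edges).
from collections import defaultdict


class ToyGraphReasoner:
    COMPRESS_AFTER = 3  # assumed threshold for illustration only

    def __init__(self):
        self.edges = defaultdict(set)      # memorized transitions
        self.path_counts = defaultdict(int)

    def observe(self, path):
        """Record a traversed path; compress it once seen often enough."""
        for u, v in zip(path, path[1:]):
            self.edges[u].add(v)
        self.path_counts[tuple(path)] += 1
        # Path Compression: a frequently traversed multi-step path
        # collapses into a single shortcut edge from start to end.
        if len(path) > 2 and self.path_counts[tuple(path)] >= self.COMPRESS_AFTER:
            self.edges[path[0]].add(path[-1])

    def reachable(self, start, goal, allowed=None):
        """DFS; `allowed` restricts the search to a context subgraph."""
        stack, seen = [start], {start}
        while stack:
            node = stack.pop()
            if node == goal:
                return True
            for nxt in self.edges[node]:
                if allowed is not None and nxt not in allowed:
                    continue
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return False


r = ToyGraphReasoner()
for _ in range(3):
    r.observe(["a", "b", "c"])  # repeatedly memorize a -> b -> c

# The context subgraph excludes the intermediate node "b", so a faithful
# contextual search has no supported route -- yet the compressed shortcut
# a -> c still fires, an unsupported (hallucinated) conclusion.
print(r.reachable("a", "c", allowed={"a", "c"}))  # True via shortcut
```

In this sketch the shortcut edge plays the role of the memorized structure that "overrides contextual constraints": once `a -> c` exists, the search no longer distinguishes a context-supported derivation from a memorized leap.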