[2604.03557] When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression




Computer Science > Artificial Intelligence
arXiv:2604.03557 (cs) [Submitted on 4 Apr 2026]

Title: When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression
Authors: Xinnan Dai, Kai Yang, Cheng Luo, Shenglai Zeng, Kai Guo, Jiliang Tang

Abstract: Reasoning hallucinations in large language models (LLMs) often appear as fluent yet unsupported conclusions that violate either the given context or underlying factual knowledge. Although such failures are widely observed, the mechanisms by which decoder-only Transformers produce them remain poorly understood. We model next-token prediction as a graph search process over an underlying graph, where entities correspond to nodes and learned transitions form edges. From this perspective, contextual reasoning is a constrained search over a sampled subgraph (intrinsic reasoning), while context-free queries rely on memorized structures in the underlying graph (extrinsic reasoning). We show that reasoning hallucinations arise from two fundamental mechanisms: Path Reuse, where memorized knowledge overrides contextual constraints during early training, and Path Compression, where frequently traversed multi-step paths collapse into shortcut edges in later training. Together, these mechanisms ...
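The abstract's graph-search framing can be illustrated with a toy sketch. Everything below is an assumption for illustration, not the paper's construction: hypothetical node names, a context subgraph that licenses only the two-step path A -> B -> C, and a memorized graph in which that path has collapsed into a shortcut edge A -> C, as Path Compression describes. Searching the memorized graph skips the intermediate step; if the context forbids the direct edge, that jump is exactly the kind of fluent but unsupported transition the paper calls a reasoning hallucination.

```python
# Toy sketch (illustrative only): next-token prediction viewed as graph search.
# Graphs, node names, and the greedy policy are hypothetical assumptions.

def search(graph, start, goal, max_steps=5):
    """Deterministic greedy walk over `graph` from `start` toward `goal`."""
    path, node = [start], start
    for _ in range(max_steps):
        neighbors = graph.get(node, set())
        if not neighbors:
            break
        node = min(neighbors)  # deterministic pick keeps the demo reproducible
        path.append(node)
        if node == goal:
            break
    return path

# Context subgraph (intrinsic reasoning): only the two-step path is licensed.
context = {"A": {"B"}, "B": {"C"}}

# Underlying memorized graph after Path Compression: the frequently traversed
# multi-step path A -> B -> C has collapsed into the shortcut edge A -> C.
memorized = {"A": {"C"}}

print(search(context, "A", "C"))    # ['A', 'B', 'C']: constrained, faithful search
print(search(memorized, "A", "C"))  # ['A', 'C']: the shortcut skips B entirely
```

Path Reuse corresponds to the search preferring the memorized edge even when the context subgraph does not contain it; Path Compression is what put that shortcut edge into the memorized graph in the first place.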

Originally published on April 07, 2026. Curated by AI News.

