[2604.04020] Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability in Large Language Models

arXiv - Machine Learning 3 min read

About this article

Abstract page for arXiv paper 2604.04020: Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability in Large Language Models

Computer Science > Computation and Language
arXiv:2604.04020 (cs) [Submitted on 5 Apr 2026]

Title: Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability in Large Language Models
Authors: Sailesh kiran kurra, Shiek Ruksana, Vishal Borusu

Abstract: This paper focuses on hallucinations produced by large language models (LLMs). LLMs have shown extraordinary language understanding and generation capabilities, yet they suffer from a major disadvantage: hallucinations, outputs that are factually incorrect, misleading, or unsupported by the input data. These hallucinations cause serious problems in high-stakes scenarios such as medical diagnosis or legal domains. In this work, we propose a causal graph attention network (GCAN) framework that reduces hallucinations by interpreting the internal attention flow within a transformer architecture, constructing token-level graphs that combine self-attention weights with gradient-based influence measures. The method quantifies each token's factual dependency using a new metric called the Causal Contribution Score (CCS). We further introduce a fact-anchored graph reweighting layer that dynamically reduces the influence of hallucination-prone nodes during generation. Experiments on standard benchmarks such a...
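The abstract only names the ingredients: attention weights and gradient-based influence combined into a per-token Causal Contribution Score (CCS), plus a reweighting layer that damps low-scoring nodes. The paper's actual formulas are not given here, so the following is a minimal illustrative sketch under assumed definitions — CCS as normalized (attention-mass-received × gradient-influence), and reweighting as damping edges into low-CCS tokens — not the authors' implementation.

```python
def causal_contribution_scores(attention, grad_influence):
    """Sketch of a CCS-style score (assumed formula, not the paper's).

    attention: T x T list of rows of self-attention weights (each row sums to 1).
    grad_influence: length-T list of nonnegative gradient-based influence magnitudes.
    """
    t = len(grad_influence)
    # Total attention mass flowing *into* each token (column sums).
    received = [sum(attention[i][j] for i in range(t)) for j in range(t)]
    # Combine attention flow with gradient influence, then normalize to a distribution.
    raw = [received[j] * grad_influence[j] for j in range(t)]
    total = sum(raw)
    return [r / total for r in raw]

def reweight_attention(attention, ccs, threshold=0.1, damp=0.5):
    """Sketch of a fact-anchored reweighting step: damp edges into
    low-CCS (hallucination-prone) tokens, then renormalize each row."""
    t = len(ccs)
    out = []
    for row in attention:
        weighted = [row[j] * (damp if ccs[j] < threshold else 1.0) for j in range(t)]
        s = sum(weighted)
        out.append([w / s for w in weighted])
    return out
```

In a real transformer these inputs would come from a forward pass (attention maps) and a backward pass (gradient saliency); here they are plain lists so the scoring and renormalization logic is easy to inspect.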

Originally published on April 07, 2026. Curated by AI News.

Related Articles

[2603.07475] A Comparative analysis of Layer-wise Representational Capacity in AR and Diffusion LLMs

arXiv - Machine Learning · 3 min
[2601.22925] BEAR: Towards Beam-Search-Aware Optimization for Recommendation with Large Language Models

arXiv - Machine Learning · 4 min
[2512.10551] LLM-Auction: Generative Auction towards LLM-Native Advertising

arXiv - Machine Learning · 3 min
[2511.17411] SPEAR-1: Scaling Beyond Robot Demonstrations via 3D Understanding

arXiv - Machine Learning · 4 min
