[2604.04020] Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability in Large Language Models
Computer Science > Computation and Language
arXiv:2604.04020 (cs)
[Submitted on 5 Apr 2026]

Title: Unmasking Hallucinations: A Causal Graph-Attention Perspective on Factual Reliability in Large Language Models
Authors: Sailesh kiran kurra, Shiek Ruksana, Vishal Borusu

Abstract: This paper focuses on hallucinations produced by large language models (LLMs). LLMs have shown extraordinary language understanding and generation capabilities, yet they suffer from a major disadvantage: hallucinations, i.e., outputs that are factually incorrect, misleading, or unsupported by the input data. These hallucinations cause serious problems in high-stakes scenarios such as medical diagnosis or legal settings. In this work, we propose a causal graph attention network (GCAN) framework that reduces hallucinations by interpreting the internal attention flow within a transformer architecture, constructing token-level graphs that combine self-attention weights with gradient-based influence. Our method quantifies each token's factual dependency using a new metric called the Causal Contribution Score (CCS). We further introduce a fact-anchored graph reweighting layer that dynamically reduces the influence of hallucination-prone nodes during generation. Experiments on standard benchmarks such a...
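The abstract describes the pipeline only at a high level, so the following is a minimal sketch under stated assumptions, not the authors' implementation: it assumes a per-token attention matrix and gradient-based saliency scores are already extracted from the model, and it uses placeholder formulas for the token-graph construction, the Causal Contribution Score, and the fact-anchored reweighting, none of which are specified in the text shown here. All function names are hypothetical.

```python
# Illustrative sketch only (assumptions, not the paper's released code): the graph
# construction, CCS definition, and reweighting rule are placeholder formulas.
import numpy as np

def build_token_graph(attention: np.ndarray, influence: np.ndarray) -> np.ndarray:
    """Combine self-attention weights (T x T) with per-token gradient-based
    influence scores (T,) into a weighted token-level adjacency matrix."""
    # Hypothetical combination: scale each attention edge by the attended
    # token's gradient-based influence, then row-normalize.
    adj = attention * influence[np.newaxis, :]
    row_sums = adj.sum(axis=1, keepdims=True) + 1e-9
    return adj / row_sums

def causal_contribution_score(adj: np.ndarray) -> np.ndarray:
    """Hypothetical Causal Contribution Score: total normalized attention each
    token receives, i.e., how strongly the rest of the sequence depends on it."""
    return adj.sum(axis=0)

def reweight_hallucination_prone(adj: np.ndarray, ccs: np.ndarray,
                                 threshold: float = 0.5) -> np.ndarray:
    """Fact-anchored reweighting sketch: damp edges into low-CCS (assumed
    hallucination-prone) nodes before they influence generation."""
    damping = np.where(ccs < threshold, ccs / threshold, 1.0)
    return adj * damping[np.newaxis, :]

# Toy usage with random stand-ins for a 4-token sequence.
rng = np.random.default_rng(0)
attn = rng.random((4, 4))
attn /= attn.sum(axis=1, keepdims=True)   # row-stochastic attention weights
grad_influence = rng.random(4)            # stand-in gradient saliency per token
graph = build_token_graph(attn, grad_influence)
ccs = causal_contribution_score(graph)
reweighted = reweight_hallucination_prone(graph, ccs)
print("CCS per token:", np.round(ccs, 3))
```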