[2510.06292] ChainMPQ: Interleaved Text-Image Reasoning Chains for Mitigating Relation Hallucinations
Computer Science > Computer Vision and Pattern Recognition

arXiv:2510.06292 (cs)

[Submitted on 7 Oct 2025 (v1), last revised 1 Mar 2026 (this version, v2)]

Title: ChainMPQ: Interleaved Text-Image Reasoning Chains for Mitigating Relation Hallucinations

Authors: Yike Wu, Yiwei Wang, Yujun Cai

Abstract: While Large Vision-Language Models (LVLMs) achieve strong performance on multimodal tasks, hallucinations continue to hinder their reliability. Of the three categories of hallucination (object, attribute, and relation), relation hallucinations account for the largest proportion yet have received the least attention. To address this issue, we propose ChainMPQ (Multi-Perspective Questions guided Interleaved Chain of Image and Text), a training-free method that improves relational inference in LVLMs by utilizing accumulated textual and visual memories. ChainMPQ first extracts subject and object keywords from the question to enhance the corresponding image regions. It then constructs multi-perspective questions that focus on the three core components of a relationship: the subject, the object, and the relation that links them. These questions are input to the model sequentially, with textual and visual memories from earlier steps providing supporting context for subsequent ones, thereby fo...
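The abstract's question-chaining loop can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the question templates, the `query_lvlm` callable, and the plain string concatenation used for textual memory are all assumptions; the paper additionally accumulates visual memories and enhances image regions, which are omitted here.

```python
def chain_mpq(relation_question, subject, obj, query_lvlm):
    """Sketch of ChainMPQ's sequential multi-perspective questioning.

    `query_lvlm(context, question)` is a hypothetical stand-in for an
    LVLM call; it receives the accumulated textual memory as context.
    """
    # Three perspectives: subject-focused, object-focused, then the
    # original relation question (templates are illustrative only).
    perspectives = [
        f"Describe the {subject} in the image.",
        f"Describe the {obj} in the image.",
        relation_question,
    ]
    memory = []  # textual memory accumulated across steps
    for q in perspectives:
        context = " ".join(memory)  # earlier answers support later steps
        answer = query_lvlm(context, q)
        memory.append(answer)
    return memory[-1]  # answer to the relation question
```

Because each call sees the answers produced so far, the final relation query is grounded in explicit evidence about both the subject and the object rather than answered in a single shot.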