[2603.22815] Focus, Don't Prune: Identifying Instruction-Relevant Regions for Information-Rich Image Understanding
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.22815 (cs)
[Submitted on 24 Mar 2026]

Title: Focus, Don't Prune: Identifying Instruction-Relevant Regions for Information-Rich Image Understanding
Authors: Mincheol Kwon, Minseung Lee, Seonga Choi, Miso Choi, Kyeong-Jin Oh, Hyunyoung Lee, Cheonyoung Park, Yongho Song, Seunghyun Park, Jinkyu Kim

Abstract: Large Vision-Language Models (LVLMs) have shown strong performance across various multimodal tasks by leveraging the reasoning capabilities of Large Language Models (LLMs). However, processing visually complex and information-rich images, such as infographics or document layouts, requires these models to generate a large number of visual tokens, leading to significant computational overhead. To address this, we propose PinPoint, a novel two-stage framework that first identifies instruction-relevant image regions and then refines them to extract fine-grained visual features for improved reasoning and efficiency. Central to our approach is the Instruction-Region Alignment, which localizes relevant regions using both visual input and textual instructions. We further introduce new annotations that provide richer ground-truth supervision for instruction-relevant regions across challenging VQA benchmarks: Infographic...
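The two-stage idea in the abstract (localize instruction-relevant regions, then re-process only those regions at finer granularity) can be sketched as below. This is a minimal illustrative assumption, not the paper's implementation: the function names, the cosine-similarity region scoring, the grid layout, and the naive upsampling are all hypothetical stand-ins for PinPoint's actual Instruction-Region Alignment and refinement stages.

```python
import numpy as np

def localize_regions(patch_feats, instr_feat, top_k=4):
    """Stage 1 (hypothetical): score each image patch by cosine similarity
    to the instruction embedding and keep the top-k most relevant patches."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    q = instr_feat / np.linalg.norm(instr_feat)
    scores = p @ q                        # one relevance score per patch
    return np.argsort(scores)[::-1][:top_k]

def refine_regions(image, region_ids, grid=8, upscale=2):
    """Stage 2 (hypothetical): crop only the selected grid cells and
    upsample them, standing in for fine-grained re-encoding."""
    h, w = image.shape[:2]
    ph, pw = h // grid, w // grid
    crops = []
    for r in region_ids:
        row, col = divmod(int(r), grid)
        crop = image[row * ph:(row + 1) * ph, col * pw:(col + 1) * pw]
        # Naive nearest-neighbour upsample as a placeholder for a
        # higher-resolution visual encoder pass.
        crops.append(np.kron(crop, np.ones((upscale, upscale))))
    return crops
```

The point of the sketch is the token-budget argument: only `top_k` of the `grid * grid` cells are re-processed at high resolution, so the fine-grained pass scales with the number of instruction-relevant regions rather than with the full image.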