[2603.00171] AdaFocus: Knowing When and Where to Look for Adaptive Visual Reasoning
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.00171 (cs)
[Submitted on 26 Feb 2026]

Title: AdaFocus: Knowing When and Where to Look for Adaptive Visual Reasoning
Authors: Yuxiang Shen, Hailong Huang, Zhenkun Gao, Xueheng Li, Chengjun Xie, Xuanhua He, Jie Zhang

Abstract: Multimodal Large Language Models (MLLMs) are shifting towards "Thinking with Images" by actively exploring image details. While effective, large-scale training is computationally expensive, which has spurred growing interest in lightweight, training-free solutions. However, existing training-free methods suffer from two flaws: perceptual redundancy from indiscriminate cropping, which adds overhead and noise; and a drift between semantic intent and spatial attention, which prevents accurate localization of user-focused regions. To address these challenges, we propose AdaFocus, a novel training-free framework designed for adaptive visual reasoning. AdaFocus follows a two-stage pipeline: a confidence-based module decides when to crop, and a semantic-guided localization module determines where to crop. This enables adaptive visual reasoning without additional training. Experimentally, AdaFocus delivers substantial performance gains while achieving an approximately 4.0× inference speedup over the SOTA method ZoomEyes...
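The two-stage pipeline described in the abstract can be sketched as follows. This is a hypothetical illustration only, not the paper's implementation: the function name `adafocus_step`, the threshold `tau`, the attention-map representation, and the quarter-size crop heuristic are all assumptions made for the sake of a runnable example.

```python
def adafocus_step(answer_confidence, attention_map, image_size, tau=0.8):
    """Hypothetical sketch of a confidence-gated, two-stage cropping step.

    Stage 1 decides WHEN to crop (confidence gating); stage 2 decides
    WHERE to crop (semantic-guided localization). Returns a crop box
    (x0, y0, x1, y1) or None if no crop is needed.
    """
    # Stage 1: if the model is already confident in its answer, skip
    # cropping entirely, avoiding redundant perception and overhead.
    if answer_confidence >= tau:
        return None

    # Stage 2: pick the region whose (assumed) semantic-attention score
    # is highest, so the crop follows the user's semantic intent rather
    # than indiscriminate tiling.
    h, w = image_size
    best = max(attention_map, key=lambda cell: cell["score"])
    cx, cy = best["center"]
    half = max(h, w) // 4  # illustrative: quarter-size window around the peak
    x0, y0 = max(0, cx - half), max(0, cy - half)
    x1, y1 = min(w, cx + half), min(h, cy + half)
    return (x0, y0, x1, y1)
```

Under this sketch, a confident answer returns no crop at all, while an uncertain one yields a single focused window, which is one plausible way a training-free method could avoid the perceptual redundancy the abstract attributes to indiscriminate cropping.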