[2511.19820] CropVLM: Learning to Zoom for Fine-Grained Vision-Language Perception
Computer Science > Computer Vision and Pattern Recognition
arXiv:2511.19820 (cs)
[Submitted on 25 Nov 2025 (v1), last revised 13 Apr 2026 (this version, v2)]

Title: CropVLM: Learning to Zoom for Fine-Grained Vision-Language Perception
Authors: Miguel Carvalho, Helder Dias, Bruno Martins

Abstract: Vision-Language Models (VLMs) often struggle with tasks that require fine-grained image understanding, such as scene-text recognition or document analysis, due to perception limitations and visual fragmentation. To address these challenges, we introduce CropVLM, an external, low-cost method for boosting performance that enables VLMs to dynamically "zoom in" on relevant image regions, enhancing their ability to capture fine details. CropVLM is trained using reinforcement learning, without human-labeled bounding boxes as a supervision signal and without expensive synthetic evaluations. The model is trained once and can be paired with both open-source and proprietary VLMs to improve their performance. Our approach delivers significant improvements on tasks that require high-resolution image understanding, notably on benchmarks that are out-of-domain for the target VLM, without modifying or fine-tuning the VLM, thus avoiding catastrophic forgetting.

Subjects: Computer Vision and Pattern Recognition (cs.CV)
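
Since the abstract describes CropVLM as an external module that proposes a region for a frozen target VLM to inspect at higher resolution, the following is a minimal sketch of what that inference-time pairing could look like. The interfaces here (`CropProposer.propose_box`, `FrozenVLM.answer`) and the choice to pass both the full image and the crop are assumptions for illustration only; the paper defines the actual model, interfaces, and reinforcement-learning training procedure.

```python
# Hypothetical sketch of pairing an external crop proposer with a frozen VLM.
# `CropProposer.propose_box` and `FrozenVLM.answer` are assumed interfaces,
# not the paper's actual API.
from PIL import Image


class CropProposer:
    """Stand-in for CropVLM: predicts a region of interest for a given question."""

    def propose_box(self, image: Image.Image, question: str) -> tuple[int, int, int, int]:
        # Placeholder: a trained model would return a learned (left, top, right, bottom) box.
        w, h = image.size
        return (0, 0, w, h)


class FrozenVLM:
    """Stand-in for any open-source or proprietary VLM; it is never fine-tuned."""

    def answer(self, images: list[Image.Image], question: str) -> str:
        raise NotImplementedError("wrap the VLM of your choice here")


def answer_with_zoom(proposer: CropProposer, vlm: FrozenVLM,
                     image: Image.Image, question: str) -> str:
    # 1) Ask the crop model where to "zoom in" for this question.
    box = proposer.propose_box(image, question)
    crop = image.crop(box)
    # 2) Give the frozen VLM both the full image (context) and the crop (fine detail).
    return vlm.answer([image, crop], question)
```

Because the VLM itself is left untouched in this setup, any gains come entirely from the proposer supplying a higher-resolution view of the relevant region, which is consistent with the abstract's claim of avoiding catastrophic forgetting.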