[2510.19496] CARES: Context-Aware Resolution Selector for VLMs
Computer Science > Computer Vision and Pattern Recognition

arXiv:2510.19496 (cs)

[Submitted on 22 Oct 2025 (v1), last revised 20 Mar 2026 (this version, v2)]

Title: CARES: Context-Aware Resolution Selector for VLMs

Authors: Moshe Kimhi, Nimrod Shabtay, Raja Giryes, Chaim Baskin, Eli Schwartz

Abstract: Large vision-language models (VLMs) commonly process images at native or high resolution to remain effective across tasks. This inflates visual tokens, often to 97-99% of total tokens, resulting in high compute and latency even when low-resolution images would suffice. We introduce \emph{CARES}, a \textbf{C}ontext-\textbf{A}ware \textbf{R}esolution \textbf{S}elector: a lightweight preprocessing module that, given an image-query pair, predicts the \emph{minimal} sufficient input resolution. CARES uses a compact VLM (350M parameters) to extract features and predict the resolution at which a target pretrained VLM's response converges to its peak ability to answer correctly. Though trained as a discrete classifier over a set of candidate resolutions, CARES interpolates continuous resolutions at inference for fine-grained control. Across five multimodal benchmarks spanning documents and natural images, as well as diverse target VLMs, CARES preserves task performance while reducing compute by up to 80%.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial ...
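One way to read the "discrete classifier, continuous at inference" idea is to take the classifier's probability distribution over candidate resolutions and compute its expectation, yielding a resolution between the trained bins. The sketch below illustrates this under stated assumptions: the function name, the candidate resolutions, and the probability values are all hypothetical, not CARES's actual API or training setup.

```python
# Illustrative sketch only: interpolating a continuous resolution from a
# discrete resolution classifier's output distribution. All names and
# values here are assumptions, not the paper's implementation.

def expected_resolution(probs, resolutions):
    """Probability-weighted average of the candidate resolutions.

    probs       -- classifier probabilities (e.g. post-softmax), one per bin
    resolutions -- the discrete resolution bins the classifier was trained on
    """
    assert len(probs) == len(resolutions)
    total = sum(probs)
    # Normalize defensively in case the scores don't sum exactly to 1.
    return sum(p * r for p, r in zip(probs, resolutions)) / total

# Hypothetical candidate resolutions (longest image side, in pixels).
CANDIDATES = [224, 448, 672, 896, 1344]

# Hypothetical probabilities from the compact selector model.
probs = [0.05, 0.60, 0.30, 0.04, 0.01]

res = expected_resolution(probs, CANDIDATES)  # a value between 448 and 672
```

A sharp (near one-hot) distribution recovers one of the trained bins, while an uncertain distribution lands between bins, which is one plausible reading of the fine-grained control the abstract describes.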