[2603.28610] ResAdapt: Adaptive Resolution for Efficient Multimodal Reasoning
Computer Science > Computer Vision and Pattern Recognition
arXiv:2603.28610 (cs)
[Submitted on 30 Mar 2026]

Title: ResAdapt: Adaptive Resolution for Efficient Multimodal Reasoning
Authors: Huanxuan Liao, Zhongtao Jiang, Yupu Hao, Yuqiao Tan, Shizhu He, Jun Zhao, Kun Xu, Kang Liu

Abstract: Multimodal Large Language Models (MLLMs) achieve stronger visual understanding by scaling input fidelity, yet the resulting growth in visual tokens makes jointly sustaining high spatial resolution and long temporal context prohibitive. We argue that the bottleneck lies not in how post-encoding representations are compressed but in the volume of pixels the encoder receives, and address it with ResAdapt, an input-side adaptation framework that learns how much visual budget each frame should receive before encoding. ResAdapt couples a lightweight Allocator with an unchanged MLLM backbone, so the backbone retains its native visual-token interface while receiving an operator-transformed input. We formulate allocation as a contextual bandit and train the Allocator with Cost-Aware Policy Optimization (CAPO), which converts sparse rollout feedback into a stable accuracy-cost learning signal. Across budget-controlled video QA, temporal grounding, and image reasoning tasks, ResAdapt improves low-budget operating points and often lies on or near the efficiency-accur...
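To make the "allocation as a contextual bandit" framing concrete, here is a minimal sketch of a per-frame resolution allocator trained against a cost-penalized reward. This is not the paper's Allocator or the CAPO objective: the discrete resolution arms, the LinUCB policy, the feature dimension, and the penalty weight `lam` are all illustrative assumptions standing in for the method described in the abstract.

```python
import numpy as np

# Hypothetical candidate input resolutions and their relative token cost.
ARMS = [224, 336, 448]
COST = {224: 1.0, 336: 2.25, 448: 4.0}

class LinUCBAllocator:
    """Per-arm LinUCB: score = theta^T x + alpha * sqrt(x^T A^{-1} x)."""

    def __init__(self, dim, alpha=0.5, lam=0.1):
        self.alpha, self.lam = alpha, lam
        self.A = {a: np.eye(dim) for a in ARMS}    # per-arm ridge covariance
        self.b = {a: np.zeros(dim) for a in ARMS}  # per-arm reward accumulator

    def choose(self, x):
        """Pick the resolution arm with the highest upper-confidence score."""
        best, best_score = None, -np.inf
        for a in ARMS:
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]
            score = theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)
            if score > best_score:
                best, best_score = a, score
        return best

    def update(self, x, arm, accuracy):
        # Cost-aware reward: task accuracy minus a visual-budget penalty.
        reward = accuracy - self.lam * COST[arm]
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

rng = np.random.default_rng(0)
alloc = LinUCBAllocator(dim=8)
counts = {a: 0 for a in ARMS}
for _ in range(500):
    x = rng.normal(size=8)
    x[0] = 1.0                   # bias feature; rest are stand-in frame features
    arm = alloc.choose(x)
    counts[arm] += 1
    acc = float(arm >= 336)      # toy environment: frames need >= 336px to succeed
    alloc.update(x, arm, acc)
```

In this toy environment, 336 is the cheapest arm that still "succeeds", so a cost-aware policy should converge toward it rather than the more expensive 448; the real Allocator would instead condition the choice on actual frame content before encoding.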