[2605.07317] Amortized-Precision Quantization for Early-Exit Vision Transformers
Computer Science > Computer Vision and Pattern Recognition
arXiv:2605.07317 (cs) [Submitted on 8 May 2026]

Title: Amortized-Precision Quantization for Early-Exit Vision Transformers
Authors: Rui Fang, Hsi-Wen Chen, Ming-Syan Chen

Abstract: Vision Transformers (ViTs) achieve strong performance across vision tasks, yet their deployment with low-precision early exiting remains fragile. Existing quantization methods assume static full-depth execution, making them unstable when exit decisions are perturbed by quantization noise, which can amplify errors along dynamic inference paths. In this paper, we introduce Amortized-Precision Quantization (APQ), a utilization-aware formulation that accounts for layer-wise stochastic exposure to quantization noise and reveals depth-precision trade-offs. Building on APQ, we propose Mutual Adaptive Quantization with Early Exiting (MAQEE), a bi-level framework that jointly optimizes exit thresholds and bit-widths under explicit risk control to improve inference stability. MAQEE establishes a superior Pareto frontier in the accuracy-efficiency trade-off, reducing BOPs by up to 95% while maintaining accuracy and outperforming strong baselines by up to 20% across classification, detection, and segmentation tasks.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence ...
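To make the "amortized precision" idea concrete, the sketch below shows one way a utilization-weighted cost could be computed: each layer's BOPs are scaled by the probability that inference actually reaches that layer under early exiting. This is only an illustrative reading of the abstract, not the paper's method; the function name `expected_bops` and all inputs (per-layer bit-widths, MAC counts, exit probabilities) are assumptions.

```python
# Minimal sketch (assumed, not from the paper): expected compute cost of a
# quantized early-exit model. Layer l is only executed if no earlier exit
# fired, so its cost is amortized by that survival probability.

def expected_bops(bits_w, bits_a, macs_per_layer, exit_probs):
    """Utilization-weighted BOPs.

    bits_w, bits_a   : per-layer weight / activation bit-widths
    macs_per_layer   : multiply-accumulate count of each layer
    exit_probs       : probability of stopping at each exit, given it is reached
    """
    survive = 1.0   # probability that execution reaches the current layer
    total = 0.0
    for bw, ba, macs, p_exit in zip(bits_w, bits_a, macs_per_layer, exit_probs):
        total += survive * bw * ba * macs   # this layer's BOPs, amortized by usage
        survive *= (1.0 - p_exit)           # deeper layers are exposed less often
    return total

# Example: 4 blocks with uniform MACs; deeper, rarely reached blocks use fewer bits.
print(expected_bops(bits_w=[8, 8, 4, 4],
                    bits_a=[8, 8, 4, 4],
                    macs_per_layer=[1e9] * 4,
                    exit_probs=[0.3, 0.3, 0.3, 1.0]))
```

Under this view, lowering precision in deep layers is cheap in accuracy terms when those layers are rarely reached, which is the depth-precision trade-off the abstract alludes to.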