[2503.03088] AHCQ-SAM: Toward Accurate and Hardware-Compatible Post-Training Segment Anything Model Quantization
Computer Science > Computer Vision and Pattern Recognition
arXiv:2503.03088 (cs)
[Submitted on 5 Mar 2025 (v1), last revised 8 Apr 2026 (this version, v4)]

Title: AHCQ-SAM: Toward Accurate and Hardware-Compatible Post-Training Segment Anything Model Quantization
Authors: Wenlun Zhang, Yunshan Zhong, Weiqi Yan, Shengchuan Zhang, Shimpei Ando, Kentaro Yoshioka

Abstract: The Segment Anything Model (SAM) has revolutionized image and video segmentation with its powerful zero-shot capabilities. However, its massive parameter scale and high computational demands hinder efficient deployment on resource-constrained edge devices. While Post-Training Quantization (PTQ) offers a practical solution, existing methods still fail to handle four critical quantization challenges: (1) ill-conditioned weights; (2) skewed and long-tailed post-GELU activations; (3) pronounced inter-channel variance in linear projections; and (4) exponentially scaled and heterogeneous attention scores. To mitigate these bottlenecks, we propose AHCQ-SAM, an accurate and hardware-compatible PTQ framework featuring four synergistic components: (1) Activation-aware Condition Number Reduction (ACNR), which regularizes weight matrices via a proximal point al...
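To make challenge (2) concrete, the following is a minimal NumPy sketch, not the paper's method: post-GELU activations are skewed and long-tailed (bounded below near -0.17 but unbounded above), so a symmetric uniform quantizer wastes half of its levels on a range the data never occupies, while an asymmetric (affine, min-max) quantizer fits the actual range. All function names and the 4-bit setting here are illustrative assumptions.

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU, common in transformer implementations
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def quantize_symmetric(y, bits=4):
    # Per-tensor symmetric uniform quantization: range [-max|y|, +max|y|]
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(y).max() / qmax
    return np.clip(np.round(y / scale), -qmax - 1, qmax) * scale

def quantize_asymmetric(y, bits=4):
    # Per-tensor asymmetric (affine) quantization: range [min(y), max(y)]
    qmax = 2**bits - 1
    lo, hi = y.min(), y.max()
    scale = (hi - lo) / qmax
    return np.clip(np.round((y - lo) / scale), 0, qmax) * scale + lo

rng = np.random.default_rng(0)
y = gelu(rng.standard_normal(100_000))  # skewed, long-tailed distribution

mse_sym = np.mean((y - quantize_symmetric(y)) ** 2)
mse_asym = np.mean((y - quantize_asymmetric(y)) ** 2)
print(f"4-bit symmetric  MSE: {mse_sym:.5f}")
print(f"4-bit asymmetric MSE: {mse_asym:.5f}")
```

Because GELU outputs are bounded below by about -0.17, the asymmetric quantizer's step size is roughly half that of the symmetric one at the same bit width, so its reconstruction error is noticeably lower; the paper's framework targets this same skew with dedicated components.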