[2511.05898] Q$^2$: Quantization-Aware Gradient Balancing and Attention Alignment for Low-Bit Quantization
Summary
The paper presents Q$^2$, a novel framework addressing gradient imbalance in low-bit quantization for complex visual tasks, enhancing performance in object detection and image segmentation.
Why It Matters
As low-bit quantization becomes increasingly important for deploying AI models efficiently, understanding and mitigating gradient imbalance can substantially improve performance on complex tasks such as object detection and segmentation, making this work directly relevant to practitioners in the field.
Key Takeaways
- Q$^2$ introduces a two-pronged approach to address gradient imbalance during low-bit quantization.
- The framework includes Quantization-aware Gradient Balancing Fusion and Attention Distribution Alignment for improved training stability.
- Experiments show an average improvement of +2.5% mAP in object detection and +3.7% mDICE in image segmentation.
- Q$^2$ is designed to be a plug-and-play solution, integrating easily into existing QAT pipelines.
- No additional inference-time overhead makes it practical for real-world applications.
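The summary describes Q-GBFusion as dynamically rebalancing gradient contributions at feature-fusion points, where accumulated quantization error can shrink one branch's gradients relative to another's. The paper's exact formulation is not given here; as a minimal sketch of the general idea, the snippet below equalizes the L2 norms of per-branch gradients at a fusion point (the function name, the mean-norm target, and the NumPy setting are illustrative assumptions, not the paper's method):

```python
import numpy as np

def balance_fusion_gradients(grads):
    """Rescale per-branch gradients so each contributes the same L2 norm.

    `grads` is a list of gradient arrays flowing into a feature-fusion
    point from different branches; here the shared target is their mean
    norm. This is a generic norm-equalization sketch, not Q-GBFusion's
    actual closed-loop mechanism.
    """
    norms = [np.linalg.norm(g) for g in grads]
    target = np.mean(norms)
    eps = 1e-12  # guard against all-zero gradients
    return [g * (target / (n + eps)) for g, n in zip(grads, norms)]

# A low-bit branch whose gradients were attenuated by quantization error
g_quantized = np.full(4, 0.01)
g_full = np.full(4, 1.0)
balanced = balance_fusion_gradients([g_quantized, g_full])
```

After rebalancing, both branches contribute equally to the fused gradient, so optimization is no longer biased toward the higher-precision path.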
Computer Science > Computer Vision and Pattern Recognition
arXiv:2511.05898 (cs)
[Submitted on 8 Nov 2025 (v1), last revised 26 Feb 2026 (this version, v2)]
Title: Q$^2$: Quantization-Aware Gradient Balancing and Attention Alignment for Low-Bit Quantization
Authors: Zhaoyang Wang, Dong Wang
Abstract: Quantization-aware training (QAT) has achieved remarkable success in low-bit ($\leq$4-bit) quantization for classification networks. However, when applied to more complex visual tasks such as object detection and image segmentation, performance still suffers significant degradation. A key cause of this limitation has been largely overlooked in the literature. In this work, we revisit this phenomenon from a new perspective and identify a major failure factor: gradient imbalance at feature fusion stages, induced by accumulated quantization errors. This imbalance biases the optimization trajectory and impedes convergence under low-bit quantization. Based on this diagnosis, we propose Q$^2$, a two-pronged framework comprising: (1) Quantization-aware Gradient Balancing Fusion (Q-GBFusion), a closed-loop mechanism that dynamically rebalances gradient contributions during feature fusion; and (2) Quantization-aware Attention Distribution Alignment (Q-ADA), a parameter-free supervision strategy that recons...
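The abstract characterizes Q-ADA only as a parameter-free supervision strategy over attention distributions, and the text is truncated before the details. One common way to realize such alignment is a KL-divergence term between normalized attention maps of the quantized and full-precision models; the sketch below shows that generic pattern (the softmax/KL choice, the temperature `tau`, and the function name are assumptions for illustration, not the paper's definition):

```python
import numpy as np

def attention_alignment_loss(attn_q, attn_fp, tau=1.0):
    """KL divergence from the full-precision attention distribution to the
    quantized one, after softmax normalization over spatial positions.

    Parameter-free in the sense of adding no learnable weights; this is a
    generic alignment sketch, not Q-ADA's actual formulation.
    """
    def softmax(x):
        z = np.exp((x - x.max()) / tau)  # shift for numerical stability
        return z / z.sum()

    p = softmax(attn_fp.ravel())  # teacher: full-precision attention
    q = softmax(attn_q.ravel())   # student: quantized attention
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

Minimizing this term pulls the quantized model's attention distribution toward the full-precision one; it vanishes when the two maps agree.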