[2511.05898] Q$^2$: Quantization-Aware Gradient Balancing and Attention Alignment for Low-Bit Quantization

arXiv - AI · 4 min read

Summary

The paper presents Q$^2$, a framework that diagnoses and corrects gradient imbalance in low-bit quantization-aware training for complex visual tasks, improving performance on object detection and image segmentation.

Why It Matters

As low-bit quantization becomes increasingly important for deploying AI models efficiently, understanding and mitigating gradient imbalance can significantly improve performance on complex tasks such as object detection and segmentation. This makes the work directly relevant to practitioners deploying quantized models.

Key Takeaways

  • Q$^2$ introduces a two-pronged approach to address gradient imbalance during low-bit quantization.
  • The framework includes Quantization-aware Gradient Balancing Fusion and Attention Distribution Alignment for improved training stability.
  • Experiments show an average improvement of +2.5% mAP in object detection and +3.7% mDICE in image segmentation.
  • Q$^2$ is designed to be a plug-and-play solution, integrating easily into existing QAT pipelines.
  • No additional inference-time overhead makes it practical for real-world applications.
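
The gradient-rebalancing idea behind Q-GBFusion can be illustrated with a toy rule (a hedged sketch under assumed details, not the paper's closed-loop mechanism): at a feature-fusion point, rescale each branch's incoming gradient so that no single branch dominates the parameter update.

```python
import numpy as np

def balance_fusion_gradients(grads, eps=1e-8):
    """Rescale per-branch gradients at a fusion point so each branch
    contributes with (approximately) equal L2 norm.

    `grads` is a list of gradient arrays, one per fused branch. This is
    an illustrative balancing rule for intuition only; the paper's
    Q-GBFusion update is a dynamic closed-loop mechanism not detailed
    in this summary.
    """
    norms = [np.linalg.norm(g) for g in grads]
    target = np.mean(norms)  # balance each branch toward the mean norm
    return [g * (target / (n + eps)) for g, n in zip(grads, norms)]
```

After rescaling, a branch whose gradient norm was inflated by accumulated quantization error no longer drowns out the others, which is the failure mode the paper attributes to low-bit QAT on detection and segmentation models.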

Computer Science > Computer Vision and Pattern Recognition

arXiv:2511.05898 (cs) [Submitted on 8 Nov 2025 (v1), last revised 26 Feb 2026 (this version, v2)]

Title: Q$^2$: Quantization-Aware Gradient Balancing and Attention Alignment for Low-Bit Quantization

Authors: Zhaoyang Wang, Dong Wang

Abstract: Quantization-aware training (QAT) has achieved remarkable success in low-bit ($\leq$4-bit) quantization for classification networks. However, when applied to more complex visual tasks such as object detection and image segmentation, performance still suffers significant degradation. A key cause of this limitation has been largely overlooked in the literature. In this work, we revisit this phenomenon from a new perspective and identify a major failure factor: gradient imbalance at feature fusion stages, induced by accumulated quantization errors. This imbalance biases the optimization trajectory and impedes convergence under low-bit quantization. Based on this diagnosis, we propose Q$^2$, a two-pronged framework comprising: (1) Quantization-aware Gradient Balancing Fusion (Q-GBFusion), a closed-loop mechanism that dynamically rebalances gradient contributions during feature fusion; and (2) Quantization-aware Attention Distribution Alignment (Q-ADA), a parameter-free supervision strategy that recons...
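
The "accumulated quantization errors" the abstract diagnoses arise from the fake-quantization round trip used in QAT: weights and activations are quantized and immediately dequantized in the forward pass, and the residual round-trip error compounds across layers. A minimal, self-contained sketch of uniform symmetric fake quantization (assumed details; the paper's quantizer may differ):

```python
import numpy as np

def fake_quantize(x, num_bits=4):
    """Uniform symmetric fake quantization: map `x` to a signed
    `num_bits` integer grid, then dequantize back to float. The
    round-trip residual (x - fake_quantize(x)) is the per-layer
    quantization error that accumulates through the network.
    """
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 7 for signed 4-bit
    scale = max(np.max(np.abs(x)) / qmax, 1e-8)    # guard against all-zero input
    q = np.clip(np.round(x / scale), -qmax, qmax)  # integer grid
    return q * scale                               # dequantize
```

For a tensor scaled to [-1, 1], the round-trip error of this quantizer is bounded by half a quantization step (scale / 2), and it is these per-layer residuals that, per the abstract, bias gradients at downstream fusion stages.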

