[2602.23192] FairQuant: Fairness-Aware Mixed-Precision Quantization for Medical Image Classification


Summary

The paper presents FairQuant, a framework for fairness-aware mixed-precision quantization in medical image classification, optimizing both performance and fairness metrics.

Why It Matters

As AI systems are increasingly used in healthcare, ensuring fairness in algorithmic decisions is critical. FairQuant addresses the challenge of maintaining model efficiency while promoting equitable outcomes across different demographic groups, which is essential for ethical AI deployment in medical settings.

Key Takeaways

  • FairQuant optimizes mixed-precision quantization while considering fairness.
  • The framework uses group-aware importance analysis and budgeted allocation.
  • Results indicate improved performance for underrepresented groups in medical imaging.
  • FairQuant maintains accuracy comparable to standard quantization methods.
  • The approach is evaluated on the Fitzpatrick17k and ISIC2019 datasets across CNN and vision-transformer architectures, demonstrating its effectiveness.
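The budgeted allocation idea from the takeaways above can be sketched as follows. This is a minimal illustration, not FairQuant's actual procedure: it assumes made-up per-layer importance scores and greedily raises the precision of the most important layers while keeping the average bit-width under a budget.

```python
def allocate_bits(importance, budget_avg_bits, choices=(2, 4, 8)):
    """Greedy budgeted mixed-precision allocation (illustrative sketch).

    Start every layer at the lowest precision, then upgrade layers in
    order of importance as long as the average bit-width stays within
    the budget.
    """
    n = len(importance)
    bits = [min(choices)] * n                      # everyone starts at lowest precision
    order = sorted(range(n), key=lambda i: -importance[i])
    for level in sorted(choices)[1:]:              # try upgrading to 4, then 8 bits
        for i in order:
            proposed = bits[:]
            proposed[i] = level
            if sum(proposed) / n <= budget_avg_bits:
                bits = proposed
    return bits

# Assumed importance scores for four layers (not from the paper),
# with a 5-bit average budget:
print(allocate_bits([0.9, 0.1, 0.5, 0.2], budget_avg_bits=5))  # → [8, 4, 4, 4]
```

Only the most important layer reaches 8 bits; the rest settle at 4 bits, keeping the average at exactly the 5-bit budget.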

Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.23192 (cs) · Submitted on 26 Feb 2026

Title: FairQuant: Fairness-Aware Mixed-Precision Quantization for Medical Image Classification

Authors: Thomas Woergaard, Raghavendra Selvan

Abstract: Compressing neural networks by quantizing model parameters offers a useful trade-off between performance and efficiency. Methods like quantization-aware training and post-training quantization strive to maintain the downstream performance of compressed models compared to the full-precision models. However, these techniques do not explicitly consider the impact on algorithmic fairness. In this work, we study fairness-aware mixed-precision quantization schemes for medical image classification under explicit bit budgets. We introduce FairQuant, a framework that combines group-aware importance analysis, budgeted mixed-precision allocation, and a learnable Bit-Aware Quantization (BAQ) mode that jointly optimizes weights and per-unit bit allocations under bitrate and fairness regularization. We evaluate the method on Fitzpatrick17k and ISIC2019 across ResNet18/50, DeiT-Tiny, and TinyViT. Results show that FairQuant configurations with average precision near 4-6 bits recover much of the Uniform 8-bit accuracy while improving worst-group performa...
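The abstract describes jointly optimizing under bitrate and fairness regularization. A hedged sketch of what such a combined objective could look like is below; the specific terms and weights are assumptions for illustration, not FairQuant's actual formulas.

```python
def fairness_aware_loss(group_losses, avg_bits, bit_budget,
                        lam_rate=0.1, lam_fair=1.0):
    """Illustrative regularized objective (assumed form, not the paper's).

    Combines the mean task loss over demographic groups, a penalty for
    exceeding the average bit budget, and a worst-group-gap fairness term.
    """
    task = sum(group_losses) / len(group_losses)   # mean loss over groups
    rate = max(0.0, avg_bits - bit_budget)         # penalize exceeding the budget
    fair = max(group_losses) - task                # worst-group gap
    return task + lam_rate * rate + lam_fair * fair

# e.g. three demographic groups, 5.2 average bits against a 5-bit budget:
print(round(fairness_aware_loss([0.3, 0.5, 0.9], 5.2, 5.0), 3))
```

With this form, a configuration that lowers the worst group's loss reduces the objective even at the same mean loss, which mirrors the paper's goal of improving worst-group performance under a fixed bit budget.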
