[2604.00529] MF-QAT: Multi-Format Quantization-Aware Training for Elastic Inference
Computer Science > Machine Learning
arXiv:2604.00529 (cs)
[Submitted on 1 Apr 2026]

Title: MF-QAT: Multi-Format Quantization-Aware Training for Elastic Inference
Authors: Zifei Xu, Sayeh Sharify, Hesham Mostafa

Abstract: Quantization-aware training (QAT) is typically performed for a single target numeric format, while practical deployments often need to choose numerical precision at inference time based on hardware support or runtime constraints. We study multi-format QAT, where a single model is trained to be robust across multiple quantization formats. We find that multi-format QAT can match single-format QAT at each target precision, yielding one model that performs well across different formats, even formats that were not seen during training. To enable practical deployment, we propose the Slice-and-Scale conversion procedure for both MXINT and MXFP, which converts a high-precision representation into lower-precision formats without re-training. Building on this, we introduce a pipeline that (i) trains a model with multi-format QAT, (ii) stores a single anchor-format checkpoint (MXINT8/MXFP8), and (iii) allows on-the-fly conversion to lower-precision MXINT or MXFP formats at runtime with negligible or no additional accuracy degradation. Together, these components provide a practical path to elastic precision scaling…
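The abstract does not give the training procedure in detail; a common way to realize multi-format QAT is to sample a target format per training step and fake-quantize (quantize, then dequantize) the weights with a straight-through estimator. The sketch below is a minimal illustration of that idea under those assumptions; the bit-width list, the per-tensor symmetric scaling, and the function names are hypothetical, not taken from the paper.

```python
import numpy as np

def fake_quant_int(x, bits):
    """Simulate symmetric integer quantization: quantize to a signed
    `bits`-bit grid, then dequantize back to floating point."""
    qmax = 2 ** (bits - 1) - 1
    amax = np.max(np.abs(x))
    scale = amax / qmax if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

rng = np.random.default_rng(0)
formats = [8, 6, 4]              # hypothetical target bit-widths
w = rng.standard_normal(16)      # stand-in for a weight tensor

# One multi-format QAT "step": sample a format, fake-quantize the
# weights, and (in real training) backprop through the rounding with
# a straight-through estimator.
bits = int(rng.choice(formats))
w_q = fake_quant_int(w, bits)
err = np.max(np.abs(w - w_q))    # bounded by half a quantization step
```

Over many steps every format in the list is visited, so a single set of weights is pushed to sit well on all of the sampled quantization grids at once.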
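The abstract also does not spell out Slice-and-Scale. MX (microscaling) formats store a block of narrow elements alongside a shared power-of-two scale, so one plausible reading for the integer case is: keep the top bits of each element ("slice") and bump the shared exponent to compensate ("scale"). The sketch below illustrates that reading for an MXINT8 → MXINT4 conversion; the function name, the truncation-based rounding, and the tiny 4-element block are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def slice_mxint8_to_mxint4(elems_i8, shared_exp):
    """Illustrative MXINT8 -> MXINT4 conversion: keep the top 4 bits of
    each signed 8-bit element via an arithmetic right shift, and add 4
    to the shared power-of-two exponent so each value
    elem * 2**shared_exp is preserved up to truncation error.
    (A real converter might round to nearest instead of truncating.)"""
    elems_i4 = np.right_shift(elems_i8.astype(np.int8), 4)  # now in [-8, 7]
    return elems_i4, shared_exp + 4

# Round-trip check on one block: dequantized values before and after
# the conversion differ only by the truncated low 4 bits.
elems = np.array([96, -64, 17, -120], dtype=np.int8)
exp = -6
before = elems.astype(np.float64) * 2.0 ** exp
e4, exp4 = slice_mxint8_to_mxint4(elems, exp)
after = e4.astype(np.float64) * 2.0 ** exp4
```

Because no re-quantization against the data is needed, a conversion of this shape can run at load time or even at runtime, which is what makes a single stored anchor checkpoint sufficient for multiple deployment precisions.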