[2508.02343] MicroMix: Efficient Mixed-Precision Quantization with Microscaling Formats for Large Language Models


About this article

Computer Science > Machine Learning
arXiv:2508.02343 (cs)
[Submitted on 4 Aug 2025 (v1), last revised 30 Mar 2026 (this version, v2)]

Title: MicroMix: Efficient Mixed-Precision Quantization with Microscaling Formats for Large Language Models
Authors: Wenyuan Liu, Haoqian Meng, Yilun Luo, Yafei Zhao, Peng Zhang, Xindian Ma

Abstract: Quantization significantly accelerates inference in large language models (LLMs) by replacing original high-precision matrices with low-precision counterparts. Recent advances in weight-activation quantization have primarily focused on mapping both weights and activations to the INT4 format. Although the new FP4 Tensor Cores in NVIDIA's Blackwell architecture offer up to 4x speedup over FP16, existing INT4-based kernels fail to fully exploit this capability due to mismatched data formats. To bridge this gap, we propose MicroMix, a co-designed mixed-precision quantization algorithm and GEMM kernel based on Microscaling (MX) data formats. Tailored for the Blackwell architecture, the MicroMix kernel supports arbitrary combinations of MXFP4, MXFP6, and MXFP8 channels, and produces BFloat16 outputs. To achieve a favorable trade-off between accuracy and efficiency for each linear layer, we introduce quantization thresholds that identify activation elements where lower-precision formats ...
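
To make the MX terminology concrete, here is a minimal NumPy sketch, not the paper's kernel or algorithm, of what the formats involve: in MXFP4, each block of 32 consecutive elements shares a single power-of-two (E8M0) scale, and each element is stored on the signed 4-bit E2M1 grid {0, 0.5, 1, 1.5, 2, 3, 4, 6}. The names quantize_mxfp4 and assign_channel_formats, and the thresholds t6/t8, are illustrative assumptions; the paper's exact threshold criterion is truncated in the abstract above.

import numpy as np

# Signed magnitudes representable by FP4 E2M1, the MXFP4 element format.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
BLOCK = 32  # MX block size: 32 consecutive elements share one scale

def quantize_mxfp4(x):
    """Fake-quantize a 1-D array to MXFP4 and return the dequantized values.
    Per 32-element block: pick a power-of-two (E8M0) scale, then round each
    element to the nearest E2M1 grid point (nearest-grid rounding here; real
    hardware uses round-to-nearest-even)."""
    x = np.asarray(x, dtype=np.float64)
    pad = (-x.size) % BLOCK
    blocks = np.pad(x, (0, pad)).reshape(-1, BLOCK)

    amax = np.abs(blocks).max(axis=1, keepdims=True)
    # Choose the scale so the block maximum lands near the top of the E2M1
    # range (max magnitude 6 = 1.5 * 2^2, hence the "- 2").
    shared_exp = np.floor(np.log2(np.maximum(amax, 2.0**-126))) - 2
    scale = np.exp2(shared_exp)

    mag = np.clip(np.abs(blocks) / scale, 0.0, FP4_GRID[-1])
    idx = np.abs(mag[..., None] - FP4_GRID).argmin(axis=-1)
    return (np.sign(blocks) * FP4_GRID[idx] * scale).reshape(-1)[: x.size]

def assign_channel_formats(calib_act, t6, t8):
    """Toy stand-in for threshold-based precision selection: channels whose
    calibration activations are larger get wider MX formats. t6 and t8 are
    hypothetical thresholds, not the paper's actual criterion."""
    amax = np.abs(calib_act).max(axis=0)  # per-channel max over tokens
    bits = np.full(amax.shape, 4)         # default: MXFP4
    bits[amax > t6] = 6                   # moderate outliers -> MXFP6
    bits[amax > t8] = 8                   # strongest outliers -> MXFP8
    return bits

# Example: measure MXFP4 round-trip error and a toy format assignment.
acts = np.random.randn(512, 128) * np.exp2(np.random.randint(-2, 3, size=128))
err = np.abs(acts - quantize_mxfp4(acts.ravel()).reshape(acts.shape)).mean()
fmts = assign_channel_formats(acts, t6=4.0, t8=8.0)
print("mean |x - q(x)|:", err)
print("channels per format (4/6/8-bit):", np.bincount(fmts, minlength=9)[[4, 6, 8]])

The actual MicroMix kernel runs such mixed-format channels on Blackwell Tensor Cores and emits BFloat16 outputs; the sketch above only reproduces the block-format numerics, not the GEMM.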

Originally published on March 31, 2026. Curated by AI News.

