[2603.01776] FreeAct: Freeing Activations for LLM Quantization
Computer Science > Computation and Language
arXiv:2603.01776 (cs)
[Submitted on 2 Mar 2026]

Title: FreeAct: Freeing Activations for LLM Quantization
Authors: Xiaohao Liu, Xiaobo Xia, Manyi Zhang, Ji-Fu Li, Xianzhi Yu, Fei Shen, Xiu Su, See-Kiong Ng, Tat-Seng Chua

Abstract: Quantization is pivotal for mitigating the significant memory and computational overhead of Large Language Models (LLMs). While emerging transformation-based methods have successfully enhanced quantization by projecting feature spaces onto smoother manifolds using orthogonal matrices, they typically enforce a rigid one-to-one transformation constraint. This static approach fails to account for the dynamic patterns inherent in input activations, particularly within diffusion LLMs (dLLMs) and Multimodal LLMs (MLLMs), where varying token types exhibit distinct distributions. To address this, we propose FreeAct, a novel quantization framework that relaxes the static one-to-one constraint to accommodate dynamic activation disparities. Theoretically, we leverage the rank-deficient nature of activations to derive a solution space that extends beyond simple inverse matrices, enabling the decoupling of activation transformations from weights. Methodologically, FreeAct identifies token-specific dynamics (i.e., vision vs. text, or masked tokens) and allocates distinct trans...
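The "one-to-one transformation constraint" the abstract refers to can be illustrated with a small NumPy sketch: for an orthogonal matrix Q, rotating activations by Q and weights by Q.T leaves the layer output unchanged, while spreading outlier channels across dimensions so that low-bit quantization loses less precision. This is a hedged toy illustration of the general transformation-based idea (as in rotation-style methods), not the FreeAct method itself; the `quantize` helper and the 50x outlier channel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activation matrix with one outlier channel, mimicking the
# heavy-tailed channel distributions seen in LLM activations.
x = rng.normal(size=(4, 8))
x[:, 0] *= 50.0                      # assumed outlier: blows up the dynamic range
W = rng.normal(size=(8, 8))

# Random orthogonal matrix Q (practical methods often use Hadamard
# or learned orthogonal matrices; a QR-based Q suffices to illustrate).
Q, _ = np.linalg.qr(rng.normal(size=(8, 8)))

def quantize(t, bits=4):
    """Symmetric per-tensor round-to-nearest quantization (illustrative)."""
    scale = np.abs(t).max() / (2 ** (bits - 1) - 1)
    return np.round(t / scale) * scale

# One-to-one constraint: the weight transform is tied to the inverse
# (here transpose) of the activation transform, so the output is exact.
assert np.allclose(x @ W, (x @ Q) @ (Q.T @ W))

# Compare quantization error with and without the rotation.
err_plain = np.abs(quantize(x) @ W - x @ W).mean()
err_rot = np.abs(quantize(x @ Q) @ (Q.T @ W) - x @ W).mean()
print(f"plain: {err_plain:.3f}  rotated: {err_rot:.3f}")
```

Because Q mixes the outlier channel into all dimensions, the rotated activations have a smaller dynamic range and typically quantize with lower error. The rigidity criticized in the abstract is visible here: the weight side must carry exactly Q.T, the inverse of the activation transform, regardless of how the input distribution varies from token to token.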