[2602.23111] PRAC: Principal-Random Subspace for LLM Activation Compression and Memory-Efficient Training
Summary
The paper presents PRAC, a method that compresses activations in large language models by decomposing them into a principal and a random subspace, achieving up to 36% total memory reduction while maintaining performance.
Why It Matters
As large language models (LLMs) grow, activations become a major memory bottleneck in training, especially at large batch sizes. By compressing them effectively, PRAC eases this bottleneck and enables memory-efficient training without sacrificing model performance.
Key Takeaways
- PRAC decomposes activations into a principal subspace and a random subspace for effective compression (a minimal sketch follows this list).
- The method achieves up to 36% memory reduction with minimal performance impact.
- With an appropriate scaling factor, PRAC yields an unbiased gradient estimator with minimum variance under certain conditions.
- The analysis links fast convergence to the requirements on subspace projection: the compressed activation should be an unbiased, low-variance estimate of the original.
- Extensive experiments validate PRAC's effectiveness in pre-training and fine-tuning tasks.
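To make the decomposition concrete, below is a minimal NumPy sketch of a PRAC-style compressor. It is an illustration, not the paper's implementation: the names `prac_compress` and `prac_decompress`, the rank parameters `k` and `r`, and the `sqrt((d - k) / r)` rescaling are assumptions here; the paper proves its own precise scaling factor and variance guarantees.

```python
import numpy as np

def prac_compress(A, k, r, rng=None):
    """Compress activations A (n, d) into a (k + r)-dimensional subspace.

    The top-k right-singular directions are kept exactly; r more directions
    are sampled from their orthogonal complement and rescaled so that the
    reconstruction is unbiased in expectation. A sketch, not the paper's code.
    """
    rng = np.random.default_rng() if rng is None else rng
    _, d = A.shape

    # Principal subspace: top-k right singular vectors carry the
    # dominant spectral mass of the activations.
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    V_k = Vt[:k].T                       # (d, k), orthonormal columns

    # Random subspace: Gaussian directions with the principal component
    # projected out, then orthonormalized, so they lie in the orthogonal
    # complement of the principal subspace.
    G = rng.standard_normal((d, r))
    G -= V_k @ (V_k.T @ G)
    Q_r, _ = np.linalg.qr(G)             # (d, r), orthonormal columns

    # Rescale the random block: the standard random-projection choice that
    # makes the tail reconstruction unbiased (the paper derives its own
    # exact factor; this one is an assumption).
    Q = np.hstack([V_k, np.sqrt((d - k) / r) * Q_r])

    return A @ Q, Q                      # compressed activations and basis

def prac_decompress(C, Q):
    """Approximate reconstruction: exact on the principal subspace,
    unbiased on the tail."""
    return C @ Q.T
```

The rescaling is what makes the tail unbiased: the columns of `Q_r` span a uniformly random r-dimensional subspace of the (d - k)-dimensional orthogonal complement, so `E[Q_r @ Q_r.T]` is `r / (d - k)` times the projector onto that complement, and the `sqrt((d - k) / r)` factor cancels it in expectation.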
Paper Details
Computer Science > Machine Learning, arXiv:2602.23111 (cs)
Submitted on 26 Feb 2026
Title: PRAC: Principal-Random Subspace for LLM Activation Compression and Memory-Efficient Training
Authors: Yanyi Li, Yimu Zhang, Cong Fang
Abstract: Activations have become the primary memory bottleneck in large-batch LLM training. However, existing compression methods fail to exploit the spectral structure of activations, resulting in slow convergence or limited compression. To address this, we connect the algorithm's fast convergence to the requirements for subspace projection, and show that an effective compression should yield an unbiased estimate of the original activation with low variance. We propose Principal-Random Subspace for LLM Activation Compression (PRAC), which decomposes activations into two components: a principal subspace captured via SVD to retain dominant information, and a random subspace sampled from the orthogonal complement to approximate the tail. By introducing a precise scaling factor, we prove that PRAC yields an unbiased gradient estimator with minimum variance under certain conditions. Extensive experiments on pre-training and fine-tuning tasks demonstrate that PRAC achieves up to 36% total memory reduction with negligible performance degradation.
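As a sanity check on the unbiasedness claim, the snippet below (again a sketch, reusing the hypothetical `prac_compress` above) averages the reconstruction over many random draws; the mean should converge to the original activations at the usual 1/sqrt(trials) Monte-Carlo rate.

```python
import numpy as np

# Toy activations with a decaying spectrum, a stand-in for real ones.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32)) * np.logspace(0, -2, 32)

k, r, trials = 8, 4, 2000
acc = np.zeros_like(A)
for _ in range(trials):
    C, Q = prac_compress(A, k, r, rng)   # fresh random tail each draw
    acc += prac_decompress(C, Q)
mean_recon = acc / trials

# Relative error of the averaged reconstruction; shrinks as trials grow.
print(np.linalg.norm(mean_recon - A) / np.linalg.norm(A))
```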