[2602.12429] Stabilizing Native Low-Rank LLM Pretraining
Summary
This paper presents Spectron, a method for stabilizing the pretraining of natively low-rank large language models (LLMs), eliminating the loss spikes that plague exclusively low-rank training while keeping overhead negligible.
Why It Matters
As LLMs grow in size, their training becomes increasingly resource-intensive. This research provides a stable recipe for training models from scratch with exclusively low-rank weights, which can significantly reduce training and inference costs and make advanced AI more accessible.
Key Takeaways
- Low-rank factorization can reduce training and inference costs for LLMs.
- The proposed Spectron method stabilizes low-rank training by controlling spectral norm growth.
- Compute-optimal scaling laws for low-rank transformers are established, guiding efficient allocation of training compute.
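To make the first takeaway concrete, here is a minimal sketch (not from the paper) of why factorizing a weight matrix as W ≈ A @ B cuts the parameter count; the layer sizes and rank below are hypothetical round numbers chosen for illustration.

```python
import numpy as np

d_in, d_out, r = 4096, 4096, 256  # hypothetical layer width and rank

# Dense layer: a full d_out x d_in weight matrix.
dense_params = d_out * d_in

# Low-rank factorization W ≈ A @ B, with A: d_out x r and B: r x d_in.
A = np.random.randn(d_out, r) / np.sqrt(r)
B = np.random.randn(r, d_in) / np.sqrt(d_in)
lowrank_params = A.size + B.size

print(dense_params, lowrank_params, lowrank_params / dense_params)
# 16,777,216 dense vs 2,097,152 factorized parameters (12.5% of the dense count)
```

The saving grows as the rank r shrinks relative to the layer width; the paper's scaling laws address how to pick that trade-off under a fixed compute budget.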
Computer Science > Machine Learning — arXiv:2602.12429 (cs)
[Submitted on 12 Feb 2026]
Title: Stabilizing Native Low-Rank LLM Pretraining
Authors: Paul Janson, Edouard Oyallon, Eugene Belilovsky
Abstract: Foundation models have achieved remarkable success, yet their growing parameter counts pose significant computational and memory challenges. Low-rank factorization offers a promising route to reduce training and inference costs, but the community lacks a stable recipe for training models from scratch using exclusively low-rank weights while matching the performance of the dense model. We demonstrate that Large Language Models (LLMs) can be trained from scratch using exclusively low-rank factorized weights for all non-embedding matrices without auxiliary "full-rank" guidance required by prior methods. While native low-rank training often suffers from instability and loss spikes, we identify uncontrolled growth in the spectral norm (largest singular value) of the weight matrix update as the dominant factor. To address this, we introduce Spectron: Spectral renormalization with orthogonalization, which dynamically bounds the resultant weight updates based on the current spectral norms of the factors. Our method enables stable, end-to-end factorized training with negligible overhead. Finally, we establish compute-optimal scaling laws for natively low-rank transformers...
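The abstract's core idea, bounding the spectral norm of the effective update without ever materializing the full weight matrix, can be sketched as follows. This is an illustrative assumption of how such a bound might be implemented, not the paper's actual Spectron recipe (which also orthogonalizes the factors); the function names and the `cap` parameter are hypothetical.

```python
import numpy as np

def spectral_norm_lowrank(A, B, iters=100):
    """Estimate the largest singular value of W = A @ B via power
    iteration, never forming the d_out x d_in product explicitly."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(B.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = A @ (B @ v)       # W v
        v = B.T @ (A.T @ u)   # W^T W v
        v /= np.linalg.norm(v)
    return np.linalg.norm(A @ (B @ v))

def capped_update(dA, dB, A, B, cap=1.0):
    """Rescale factor updates so the spectral norm of the first-order
    weight change dW ≈ dA @ B + A @ dB stays below cap times the
    current spectral norm of W (an assumed form of the bound)."""
    sigma_w = spectral_norm_lowrank(A, B)
    # dA @ B + A @ dB equals [dA | A] @ [B ; dB], still low-rank.
    sigma_d = spectral_norm_lowrank(np.hstack([dA, A]),
                                    np.vstack([B, dB]))
    scale = min(1.0, cap * sigma_w / (sigma_d + 1e-12))
    return scale * dA, scale * dB
```

Power iteration on the factors costs only O((d_out + d_in) * r) per step, which is why this style of renormalization can have the "negligible overhead" the abstract claims.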