[2603.00042] Maximizing the Spectral Energy Gain in Sub-1-Bit LLMs via Latent Geometry Alignment
Computer Science > Machine Learning

arXiv:2603.00042 (cs) [Submitted on 9 Feb 2026]

Title: Maximizing the Spectral Energy Gain in Sub-1-Bit LLMs via Latent Geometry Alignment
Authors: Banseok Lee, Youngmin Kim

Abstract: We identify the Spectral Energy Gain in extreme model compression, where low-rank binary approximations outperform tiny-rank floating-point baselines for heavy-tailed spectra. However, prior attempts fail to realize this potential, trailing state-of-the-art 1-bit methods. We attribute this degradation to Latent Geometry Misalignment: standard singular vectors exhibit high coherence (a spiky distribution), the worst-case geometry for binary quantization. To realize this gain, we propose LittleBit-2, a framework employing Internal Latent Rotation and Joint Iterative Quantization (Joint-ITQ). This approach acts as a geometric preconditioner, aligning coherent latent distributions with the binary hypercube at zero inference overhead. Empirically, LittleBit-2 establishes a new state of the art in the sub-1-bit regime (1$\sim$0.1 bpp) on Llama-2 and Llama-3, matching the fidelity of leading 1-bit baselines.

Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.00042 [cs.LG] (or arXiv:2603.00042v1 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.00042
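The abstract attributes the degradation to coherent (spiky) singular vectors being the worst-case geometry for binary quantization, and fixes it by rotating the latent factors toward the binary hypercube. The classic Iterative Quantization (ITQ) procedure of Gong and Lazebnik implements exactly this kind of alignment; the sketch below is an illustrative single-factor ITQ loop, not the paper's Joint-ITQ, and the function name `itq_rotation` is our own.

```python
import numpy as np

def itq_rotation(V, n_iter=50):
    """Find an orthogonal rotation R minimizing ||sign(V R) - V R||_F^2,
    so the rotated latent factors V R align with the binary hypercube.
    Illustrative ITQ-style sketch; not the paper's exact Joint-ITQ."""
    r = V.shape[1]
    R = np.eye(r)  # identity init guarantees monotone descent from the unrotated error
    for _ in range(n_iter):
        # B-step: closest hypercube vertices to the rotated factors
        B = np.sign(V @ R)
        B[B == 0] = 1.0
        # R-step: orthogonal Procrustes, min_R ||B - V R||_F
        U, _, Wt = np.linalg.svd(V.T @ B)
        R = U @ Wt
    return R

# Toy check: the learned rotation reduces binary quantization error
# relative to quantizing the unrotated singular vectors.
V = np.linalg.qr(np.random.default_rng(1).standard_normal((256, 8)))[0]
R = itq_rotation(V)
err_unrotated = np.linalg.norm(np.sign(V) - V)
err_rotated = np.linalg.norm(np.sign(V @ R) - V @ R)
```

Because both alternating steps (binarization and Procrustes) are exact minimizations, the quantization error is non-increasing, which is why `err_rotated <= err_unrotated` holds from the identity initialization.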