[2602.19241] Scaling Laws for Precision in High-Dimensional Linear Regression
Summary
This paper develops scaling laws for low-precision training in high-dimensional linear regression, analyzing how quantization affects effective model and data capacities.
Why It Matters
Understanding the theoretical implications of quantization in machine learning is crucial for optimizing model training under hardware constraints. This research provides a foundation for improving training protocols, which can enhance performance and efficiency in real-world applications.
Key Takeaways
- Low-precision training balances model quality and training costs.
- Quantization impacts effective model and data capacities differently.
- Multiplicative quantization preserves the full-precision effective model size, while additive quantization reduces it (see the sketch after this list).
- The study provides a theoretical basis for optimizing training under hardware constraints.
- Numerical experiments validate the theoretical findings.
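To make the dichotomy in the third takeaway concrete, here is a minimal NumPy sketch contrasting the two schemes in a plain (unsketched) linear regression. It is an illustrative toy under assumed parameters, not the paper's setup: the noise scales eps and delta, the choice to quantize the SGD iterate after every update, and the use of parameter error as a risk proxy are all assumptions made for the demo.

import numpy as np

rng = np.random.default_rng(0)

d, n = 512, 2048                      # ambient dimension, sample size (assumed)
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = X @ w_star + 0.1 * rng.standard_normal(n)

def quantize_mult(w, eps=0.05):
    # Multiplicative (signal-dependent) noise: error scales with |w_i|.
    return w * (1.0 + eps * rng.standard_normal(w.shape))

def quantize_add(w, delta=0.05):
    # Additive (signal-independent) noise: fixed-scale error per coordinate.
    return w + delta * rng.standard_normal(w.shape)

def sgd_param_error(quantizer, lr=0.01, steps=500, batch=64):
    # Plain minibatch SGD with the iterate quantized after every update.
    w = np.zeros(d)
    for _ in range(steps):
        idx = rng.integers(0, n, size=batch)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        w = quantizer(w - lr * grad)
    return np.mean((w - w_star) ** 2)  # parameter error as a risk proxy

print("multiplicative:", sgd_param_error(quantize_mult))
print("additive:      ", sgd_param_error(quantize_add))

One intuition the toy conveys: multiplicative noise shrinks with the coordinates it perturbs, so small coordinates stay accurate, whereas additive noise injects a fixed noise floor into every coordinate regardless of signal strength.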
Abstract
arXiv:2602.19241 (stat.ML) · Submitted on 22 Feb 2026
Authors: Dechen Zhang, Xuan Tang, Yingyu Liang, Difan Zou
Low-precision training is critical for optimizing the trade-off between model quality and training costs, necessitating the joint allocation of model size, dataset size, and numerical precision. While empirical scaling laws suggest that quantization impacts effective model and data capacities or acts as an additive error, the theoretical mechanisms governing these effects remain largely unexplored. In this work, we initiate a theoretical study of scaling laws for low-precision training within a high-dimensional sketched linear regression framework. By analyzing multiplicative (signal-dependent) and additive (signal-independent) quantization, we identify a critical dichotomy in their scaling behaviors. Our analysis reveals that while both schemes introduce an additive error and degrade the effective data size, they exhibit distinct effects on effective model size: multiplicative quantization maintains the full-precision model size, whereas additive quantization reduces the effective model size. Numerical experiments validate our theoretical findings. By rigorously characterizing the complex interplay among model scale, d...
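For reference, one standard way to write the two quantization operators named in the abstract (our notation, not necessarily the paper's):

\[
  Q_{\mathrm{mult}}(v) \;=\; v \odot (\mathbf{1} + \varepsilon),
  \qquad
  Q_{\mathrm{add}}(v) \;=\; v + \delta,
\]

where \(\varepsilon\) and \(\delta\) are zero-mean quantization noises: \(\varepsilon\) acts coordinate-wise, so the error on coordinate \(i\) scales with \(|v_i|\) (signal-dependent), while \(\delta\) is drawn independently of \(v\) (signal-independent).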