[2602.19241] Scaling Laws for Precision in High-Dimensional Linear Regression

arXiv - Machine Learning

Summary

This paper develops a theory of scaling laws for low-precision training in high-dimensional linear regression, analyzing how quantization affects effective model and data capacities.

Why It Matters

Understanding the theoretical implications of quantization in machine learning is crucial for optimizing model training under hardware constraints. This research provides a foundation for improving training protocols, which can enhance performance and efficiency in real-world applications.

Key Takeaways

  • Low-precision training balances model quality and training costs.
  • Multiplicative and additive quantization affect effective model and data capacities in distinct ways.
  • Multiplicative quantization preserves the full-precision effective model size, while additive quantization reduces it.
  • The study provides a theoretical basis for optimizing training under hardware constraints.
  • Numerical experiments validate the theoretical findings.

Statistics > Machine Learning · arXiv:2602.19241 (stat)
Submitted on 22 Feb 2026

Title: Scaling Laws for Precision in High-Dimensional Linear Regression
Authors: Dechen Zhang, Xuan Tang, Yingyu Liang, Difan Zou

Abstract: Low-precision training is critical for optimizing the trade-off between model quality and training costs, necessitating the joint allocation of model size, dataset size, and numerical precision. While empirical scaling laws suggest that quantization impacts effective model and data capacities or acts as an additive error, the theoretical mechanisms governing these effects remain largely unexplored. In this work, we initiate a theoretical study of scaling laws for low-precision training within a high-dimensional sketched linear regression framework. By analyzing multiplicative (signal-dependent) and additive (signal-independent) quantization, we identify a critical dichotomy in their scaling behaviors. Our analysis reveals that while both schemes introduce an additive error and degrade the effective data size, they exhibit distinct effects on effective model size: multiplicative quantization maintains the full-precision model size, whereas additive quantization reduces the effective model size. Numerical experiments validate our theoretical findings. By rigorously characterizing the complex interplay among model scale, d...
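To make the "sketched linear regression" framework mentioned in the abstract concrete, here is a hedged toy sketch (the dimensions and sketch matrix are illustrative assumptions, not the paper's construction): the model is restricted to an m-dimensional random subspace of the d-dimensional ambient space, so the sketch size m plays the role of "model size" in the scaling law.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, m = 200, 300, 50          # ambient dim, samples, sketch ("model") size

w_true = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

# Random sketch: fit only m coefficients in a random subspace
S = rng.normal(size=(d, m)) / np.sqrt(m)
v_hat, *_ = np.linalg.lstsq(X @ S, y, rcond=None)
w_sketch = S @ v_hat            # lift the estimate back to ambient space

X_test = rng.normal(size=(2000, d))
mse = float(np.mean((X_test @ (w_sketch - w_true)) ** 2))
print(f"sketched-model test MSE (m={m}): {mse:.4f}")
```

Shrinking m cuts the effective model capacity and raises the approximation error; the paper's analysis then asks how quantization shifts this effective m (and the effective data size n) across the two quantization schemes.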

