[2509.19189] Functional Scaling Laws in Kernel Regression: Loss Dynamics and Learning Rate Schedules

arXiv - Machine Learning · 4 min read

Summary

This article summarizes a paper on Functional Scaling Laws in kernel regression, covering the full loss dynamics of training and the impact of learning rate schedules, with implications for the training efficiency of large language models.

Why It Matters

Understanding loss dynamics and learning rate schedules is crucial for improving the training efficiency of machine learning models, particularly large language models. This research offers a theoretical framework that can guide practitioners in optimizing their training processes, potentially leading to better model performance and resource utilization.

Key Takeaways

  • Introduces a Functional Scaling Law (FSL) that captures full loss trajectories under various learning rate schedules.
  • Higher-capacity models demonstrate improved data and compute efficiency.
  • Learning rate decay enhances training efficiency, while warmup-stable-decay schedules outperform pure decay (see the schedule sketch after this list).
  • The study provides empirical evidence supporting the theoretical framework through experiments on large language models.
  • Offers insights that can help optimize training processes in machine learning.
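
For concreteness, the sketch below shows what the three schedule families discussed in the paper (constant, exponential decay, and warmup-stable-decay) can look like in code, along with intrinsic time computed as the running sum of learning rates. The specific functional forms, warmup and decay fractions, and constants here are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def constant_lr(step, total_steps, lr=0.1):
    """Constant schedule: the learning rate never changes."""
    return lr

def exp_decay_lr(step, total_steps, lr=0.1, decay=5.0):
    """Exponential decay: the learning rate shrinks smoothly over training."""
    return lr * np.exp(-decay * step / total_steps)

def wsd_lr(step, total_steps, lr=0.1, warmup_frac=0.05, decay_frac=0.2):
    """Warmup-stable-decay (WSD): linear warmup, constant plateau, linear decay."""
    warmup_steps = int(warmup_frac * total_steps)
    decay_start = int((1.0 - decay_frac) * total_steps)
    if step < warmup_steps:
        return lr * (step + 1) / warmup_steps  # linear warmup
    if step < decay_start:
        return lr                              # stable plateau
    return lr * (total_steps - step) / (total_steps - decay_start)  # final decay

# Intrinsic time, in the paper's sense, tracks training progress by accumulated
# learning rate rather than raw iteration count; here it is the running sum.
total_steps = 10_000
schedule = [wsd_lr(t, total_steps) for t in range(total_steps)]
intrinsic_time = np.cumsum(schedule)
```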

Computer Science > Machine Learning
arXiv:2509.19189 (cs)
[Submitted on 23 Sep 2025 (v1), last revised 15 Feb 2026 (this version, v4)]

Title: Functional Scaling Laws in Kernel Regression: Loss Dynamics and Learning Rate Schedules
Authors: Binghui Li, Fengling Chen, Zixun Huang, Lean Wang, Lei Wu

Abstract: Scaling laws have emerged as a unifying lens for understanding and guiding the training of large language models (LLMs). However, existing studies predominantly focus on the final-step loss, leaving open whether the entire loss dynamics obey similar laws and, crucially, how the learning rate schedule (LRS) shapes them. We address these gaps in a controlled theoretical setting by analyzing stochastic gradient descent (SGD) on a power-law kernel regression model. The key insight is a novel intrinsic-time viewpoint, which captures the training progress more faithfully than iteration count. We then establish a Functional Scaling Law (FSL) that captures the full loss trajectory under arbitrary LRSs, with the schedule's influence entering through a simple convolutional functional. We further instantiate the theory for three representative LRSs -- constant, exponential decay, and warmup-stable-decay (WSD) -- and derive explicit scaling relations in both data- and compute-limited regimes. These comparis...
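
To make the setting concrete, here is a minimal toy sketch of one-sample SGD on a power-law kernel regression problem, logging the excess risk against intrinsic time (accumulated learning rate). The dimension, power-law exponents, noise level, and the particular decay schedule are all illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy power-law setup: kernel eigenvalues and target coefficients decay
# polynomially. Exponents, dimension, and noise level are illustrative only.
d = 500
alpha, beta = 1.5, 1.0
eigvals = np.arange(1, d + 1, dtype=float) ** (-alpha)    # kernel spectrum
theta_star = np.arange(1, d + 1, dtype=float) ** (-beta)  # target in eigenbasis
noise_std = 0.1

def lr_schedule(t, total, lr0=0.5, decay=3.0):
    """Stand-in exponential-decay schedule; swap in constant or WSD variants."""
    return lr0 * np.exp(-decay * t / total)

total_steps = 20_000
theta = np.zeros(d)
log, intrinsic_time = [], 0.0

for t in range(total_steps):
    # One-sample SGD step; features are drawn in the kernel eigenbasis so that
    # each coordinate has variance equal to the corresponding eigenvalue.
    x = rng.standard_normal(d) * np.sqrt(eigvals)
    y = x @ theta_star + noise_std * rng.standard_normal()
    eta = lr_schedule(t, total_steps)
    theta -= eta * (x @ theta - y) * x
    intrinsic_time += eta  # progress measured in accumulated learning rate
    if t % 500 == 0:
        excess_risk = 0.5 * np.sum(eigvals * (theta - theta_star) ** 2)
        log.append((intrinsic_time, excess_risk))

print(log[-1])  # (intrinsic time, excess risk) at the end of training
```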

Related Articles

  • Google’s Gemini AI can answer your questions with 3D models and simulations (The Verge - AI · 4 min)
    Google's latest upgrade for Gemini will allow the chatbot to generate interactive 3D models and simulations in response to your questions...
  • Moody’s Integrates AI Agents With Anthropic’s Claude (AI Tools & Products · 4 min)
  • AI on the couch: Anthropic gives Claude 20 hours of psychiatry (AI Tools & Products · 6 min)
  • These AI Glasses Switch Between ChatGPT and Gemini. Why Don't More Wearables Do This? (AI Tools & Products · 6 min)
