[2602.14208] Fast Catch-Up, Late Switching: Optimal Batch Size Scheduling via Functional Scaling Laws

arXiv - Machine Learning · 4 min read

Summary

This paper explores optimal batch size scheduling in deep learning, revealing that task difficulty influences the effectiveness of batch size adjustments throughout training.

Why It Matters

Understanding batch size scheduling is crucial for optimizing deep learning models, particularly as it affects both computational efficiency and training dynamics. This research provides a theoretical framework that can enhance model performance while reducing data consumption, which is vital for practitioners in the field.

Key Takeaways

  • Optimal batch size scheduling varies with task difficulty; easy tasks benefit from increasing batch sizes, while hard tasks require a late switch to larger batches.
  • The 'fast catch-up effect' allows for effective late switching without performance loss, leveraging rapid alignment of loss trajectories.
  • Extensive experiments validate the theoretical predictions, showing late-switch schedules outperform constant-batch and early-switch methods.
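The late-switch schedule described above can be sketched as a simple step function. This is a minimal illustration, not the paper's actual method; the batch sizes, switch fraction, and step count below are hypothetical placeholders.

```python
def late_switch_batch_size(step, total_steps, small_bs=256, large_bs=4096,
                           switch_frac=0.8):
    """Late-switch schedule: train with a small batch for most of training,
    then switch to a large batch in a late stage.

    All parameter values here are illustrative assumptions, not values
    from the paper.
    """
    if step < switch_frac * total_steps:
        return small_bs
    return large_bs

# Example: a 1000-step run switches from 256 to 4096 at step 800.
schedule = [late_switch_batch_size(s, total_steps=1000) for s in range(1000)]
```

In contrast, an "easy task" schedule under the paper's characterization would increase the batch size throughout training rather than holding it small until a single late switch.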

Computer Science > Machine Learning · arXiv:2602.14208 (cs) · [Submitted on 15 Feb 2026]

Title: Fast Catch-Up, Late Switching: Optimal Batch Size Scheduling via Functional Scaling Laws

Authors: Jinbo Wang, Binghui Li, Zhanpeng Zhou, Mingze Wang, Yuxuan Sun, Jiaqi Zhang, Xunliang Cai, Lei Wu

Abstract: Batch size scheduling (BSS) plays a critical role in large-scale deep learning training, influencing both optimization dynamics and computational efficiency. Yet, its theoretical foundations remain poorly understood. In this work, we show that the functional scaling law (FSL) framework introduced in Li et al. (2025a) provides a principled lens for analyzing BSS. Specifically, we characterize the optimal BSS under a fixed data budget and show that its structure depends sharply on task difficulty. For easy tasks, optimal schedules keep increasing batch size throughout. In contrast, for hard tasks, the optimal schedule maintains small batch sizes for most of training and switches to large batches only in a late stage. To explain the emergence of late switching, we uncover a dynamical mechanism -- the fast catch-up effect -- which also manifests in large language model (LLM) pretraining. After switching from small to large batches, the loss rapidly aligns with the constant large-batch trajectory. Using FSL,...
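The "fixed data budget" constraint in the abstract has a concrete accounting: if each step consumes one batch of data, then the batch sizes across all steps must sum to the budget, so a late-switch schedule trades many small-batch steps for a handful of large-batch steps at the end. A sketch with assumed numbers (none taken from the paper):

```python
# Hypothetical accounting under a fixed data budget of D samples.
D = 1_000_000            # total data budget (assumed)
small_bs, large_bs = 256, 4096   # assumed small/large batch sizes
switch_frac = 0.8        # assumed fraction of the budget spent at small batch

# Steps taken in each phase so that total data consumed equals (roughly) D.
small_budget = int(D * switch_frac)
small_steps = small_budget // small_bs          # many small-batch steps
large_steps = (D - small_budget) // large_bs    # few large-batch steps

print(small_steps, large_steps)
```

With these numbers the small-batch phase runs for 3125 steps while the late large-batch phase takes only 48, which illustrates why the switch point, not the large-batch phase length, dominates the schedule's cost structure.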
