[2604.00260] Learning to Shuffle: Block Reshuffling and Reversal Schemes for Stochastic Optimization


arXiv - Machine Learning 4 min read

About this article


Computer Science > Machine Learning
arXiv:2604.00260 (cs) [Submitted on 31 Mar 2026]

Title: Learning to Shuffle: Block Reshuffling and Reversal Schemes for Stochastic Optimization
Authors: Lam M. Nguyen, Dzung T. Phan, Jayant Kalagnanam

Abstract: Shuffling strategies for stochastic gradient descent (SGD), including incremental gradient, shuffle-once, and random reshuffling, are supported by rigorous convergence analyses for arbitrary within-epoch permutations. In particular, random reshuffling is known to improve optimization constants relative to cyclic and shuffle-once schemes. However, existing theory offers limited guidance on how to design new data-ordering schemes that further improve optimization constants or stability beyond random reshuffling. In this paper, we design a pipeline using a large language model (LLM)-guided program evolution framework to discover an effective shuffling rule for without-replacement SGD. Abstracting from this instance, we identify two fundamental structural components: block reshuffling and paired reversal. We analyze these components separately and show that block reshuffling strictly reduces prefix-gradient variance constants within the unified shuffling framework, yielding provable improvements over random reshuffling under mild conditions. Separately, w...
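The abstract names two structural components, block reshuffling and paired reversal, but this excerpt does not spell out their exact definitions. As a rough illustration only, here is one plausible reading in Python next to the random-reshuffling baseline: block reshuffling permutes contiguous blocks while preserving within-block order, and paired reversal flips every second block. The function names and the precise pairing rule are assumptions for this sketch, not the paper's specification.

```python
import random

def random_reshuffling(n, epochs, seed=0):
    """Baseline scheme: draw a fresh uniform permutation each epoch."""
    rng = random.Random(seed)
    perms = []
    for _ in range(epochs):
        order = list(range(n))
        rng.shuffle(order)
        perms.append(order)
    return perms

def block_reshuffle(n, block_size, seed=0):
    """Illustrative block reshuffling: split the n indices into
    contiguous blocks, permute the blocks, keep within-block order."""
    rng = random.Random(seed)
    blocks = [list(range(i, min(i + block_size, n)))
              for i in range(0, n, block_size)]
    rng.shuffle(blocks)
    return [i for b in blocks for i in b]

def paired_reversal(order, block_size):
    """Illustrative paired reversal: walk the order in blocks and
    reverse every second block, so consecutive block pairs mirror
    each other."""
    blocks = [order[i:i + block_size]
              for i in range(0, len(order), block_size)]
    out = []
    for k, b in enumerate(blocks):
        out.extend(reversed(b) if k % 2 else b)
    return out

# Example: one epoch order combining both components.
epoch_order = paired_reversal(block_reshuffle(12, 3, seed=1), 3)
assert sorted(epoch_order) == list(range(12))  # still a valid permutation
```

Both components output valid without-replacement permutations, so they drop into any epoch-based SGD loop in place of a plain shuffle; the abstract's claim concerns how such structure affects prefix-gradient variance constants, which this sketch does not attempt to reproduce.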

Originally published on April 02, 2026. Curated by AI News.


