[2603.20696] High-dimensional online learning via asynchronous decomposition: Non-divergent results, dynamic regularization, and beyond

arXiv - Machine Learning

About this article


Statistics > Machine Learning

arXiv:2603.20696 (stat) [Submitted on 21 Mar 2026]

Title: High-dimensional online learning via asynchronous decomposition: Non-divergent results, dynamic regularization, and beyond

Authors: Shixiang Liu, Zhifan Li, Hanming Yang, Jianxin Yin

Abstract: Existing high-dimensional online learning methods often face the challenge that their error bounds, or per-batch sample sizes, diverge as the number of data batches increases. To address this issue, we propose an asynchronous decomposition framework that leverages summary statistics to construct a surrogate score function for current-batch learning. This framework is implemented via a dynamic-regularized iterative hard thresholding algorithm, providing a computationally and memory-efficient solution for sparse online optimization. We provide a unified theoretical analysis that accounts for both the streaming computational error and statistical accuracy, establishing that our estimator maintains non-divergent error bounds and $\ell_0$ sparsity across all batches. Furthermore, the proposed estimator adaptively achieves additional gains as batches accumulate, attaining the oracle accuracy as if the entire historical dataset were accessible and the true support were known. These theoretic...
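The core pattern the abstract describes is concrete enough to sketch: instead of revisiting raw historical data, the method carries summary statistics forward, forms a surrogate score (gradient) from them, and enforces $\ell_0$ sparsity via iterative hard thresholding on each incoming batch. Below is a minimal Python sketch of that pattern for sparse linear regression. The running Gram-matrix and cross-moment summaries, the fixed sparsity level `s`, the step size, and the inner-iteration count are all illustrative assumptions; in particular, the fixed hard threshold here stands in for the paper's dynamic regularization and is not the authors' exact algorithm.

```python
import numpy as np


def hard_threshold(beta, s):
    """Keep the s largest-magnitude coordinates of beta, zero the rest."""
    out = np.zeros_like(beta)
    keep = np.argsort(np.abs(beta))[-s:]
    out[keep] = beta[keep]
    return out


def online_iht(batches, p, s, step=0.1, inner_iters=50):
    """Illustrative summary-statistic IHT for sparse online regression.

    Raw historical data is never stored: a running Gram matrix and
    cross-moment summarize all past batches, so memory stays O(p^2)
    no matter how many batches arrive. NOTE: this fixed-threshold
    loop is a stand-in, not the paper's dynamic-regularized method.
    """
    gram = np.zeros((p, p))   # running sum of X_b^T X_b over batches
    xty = np.zeros(p)         # running sum of X_b^T y_b over batches
    n_total = 0
    beta = np.zeros(p)
    for X_b, y_b in batches:
        # Fold the current batch into the summary statistics.
        gram += X_b.T @ X_b
        xty += X_b.T @ y_b
        n_total += len(y_b)
        # Surrogate score: the averaged squared-loss gradient, computed
        # entirely from the summaries rather than raw historical data.
        for _ in range(inner_iters):
            grad = (gram @ beta - xty) / n_total
            beta = hard_threshold(beta - step * grad, s)
        yield beta.copy()  # estimate after incorporating this batch


if __name__ == "__main__":
    # Toy stream: 10 batches of 100 samples, p = 200, true support size 5.
    rng = np.random.default_rng(0)
    p, s = 200, 5
    beta_true = np.zeros(p)
    beta_true[:s] = 1.0

    def make_batch(n=100):
        X = rng.standard_normal((n, p))
        return X, X @ beta_true + 0.1 * rng.standard_normal(n)

    stream = (make_batch() for _ in range(10))
    for t, est in enumerate(online_iht(stream, p, s), 1):
        print(f"batch {t}: error {np.linalg.norm(est - beta_true):.4f}")
```

Because the summaries aggregate every batch seen so far, the estimation error in this toy run shrinks as batches accumulate, which mirrors (in a loose, illustrative sense) the abstract's claim of additional gains over time rather than divergence.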

Originally published on March 24, 2026. Curated by AI News.

