[2602.20585] Characterizing Online and Private Learnability under Distributional Constraints via Generalized Smoothness

arXiv - Machine Learning · 4 min read

Summary

This paper studies the conditions under which learning is achievable in online and differentially private settings, focusing on generalized smoothness and its implications for sample complexity and regret bounds.

Why It Matters

Understanding the minimal assumptions needed for effective learning under distributional constraints is a central question in machine learning theory. This work bridges the gap between the independent (i.i.d.) and fully adversarial learning paradigms, with implications for online learning and differential privacy that are relevant to both theoretical and practical applications in AI.

Key Takeaways

  • Generalized smoothness characterizes learnability under distributional adversaries (the setting is sketched after this list).
  • The paper provides universal algorithms achieving low regret without prior knowledge of the distribution family.
  • Refined bounds are established when the distribution family is known, enhancing understanding of sample complexity.
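
For context, here is a minimal sketch of the sequential protocol behind these takeaways, using the notation of the abstract below ($U$ for the distribution family, $\mathcal{H}$ for the hypothesis class). The specific loss and feedback model shown here are illustrative assumptions, not details taken from the paper:

```latex
% Online learning against a distributional adversary (illustrative sketch).
% H is a hypothesis class of finite VC dimension; U is a fixed family of
% distributions from which the adversary may choose at each round.
\begin{align*}
&\text{For rounds } t = 1, \dots, T:\\
&\qquad \text{the learner selects a predictor } h_t \in \mathcal{H};\\
&\qquad \text{the adversary adaptively selects } \mu_t \in U \text{ given the history};\\
&\qquad \text{an example } (x_t, y_t) \sim \mu_t \text{ is drawn, and the learner suffers } \ell(h_t(x_t), y_t).
\end{align*}
% Regret compares the learner's cumulative loss to the best fixed
% hypothesis in hindsight:
\[
\mathrm{Reg}_T = \sum_{t=1}^{T} \ell\big(h_t(x_t), y_t\big)
  \;-\; \min_{h \in \mathcal{H}} \sum_{t=1}^{T} \ell\big(h(x_t), y_t\big).
\]
```

Taking $U$ to be the set of all distributions recovers the fully adversarial (Littlestone) setting, while a singleton $U$ recovers i.i.d. learning; the paper asks which intermediate families behave like the favorable i.i.d. case.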

Statistics > Machine Learning · arXiv:2602.20585 (stat) · Submitted on 24 Feb 2026

Title: Characterizing Online and Private Learnability under Distributional Constraints via Generalized Smoothness

Authors: Moïse Blanchard, Abhishek Shetty, Alexander Rakhlin

Abstract: Understanding the minimal assumptions that enable learning and generalization is perhaps the central question of learning theory. Several celebrated results in statistical learning theory, such as the VC theorem and Littlestone's characterization of online learnability, establish conditions on the hypothesis class that allow for learning under independent data and adversarial data, respectively. Building upon recent work bridging these extremes, we study sequential decision making under distributional adversaries that can adaptively choose data-generating distributions from a fixed family $U$, and we ask when such problems are learnable with sample complexity that behaves like the favorable independent case. We provide a near-complete characterization of the families $U$ that admit learnability in terms of a notion known as generalized smoothness: a distribution family admits VC-dimension-dependent regret bounds for every finite-VC hypothesis class if and only if it is generalized smooth. Further, we give universal al...
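
The abstract is truncated before it defines generalized smoothness. For reference, the classical $\sigma$-smoothness notion from the smoothed online learning literature, which the paper's definition presumably generalizes (an assumption on our part, since the definition itself is cut off above), requires that the adversary's distributions cannot concentrate mass too sharply relative to a fixed base measure $\nu$:

```latex
% Classical sigma-smoothness, stated for reference; the paper's
% "generalized smoothness" is a broader notion not defined here.
% A distribution mu on X is sigma-smooth with respect to nu if
\[
\frac{d\mu}{d\nu}(x) \;\le\; \frac{1}{\sigma} \quad \text{for all } x \in \mathcal{X},
\qquad \text{equivalently} \qquad
\mu(A) \;\le\; \frac{\nu(A)}{\sigma} \ \text{ for every measurable } A \subseteq \mathcal{X}.
\]
```

Under this classical condition, known results give regret for a VC class of dimension $d$ that grows with $\sqrt{T}$ and $d$ but only logarithmically in $1/\sigma$, in contrast to the Littlestone-dimension dependence required against fully adversarial data.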

