[2209.14267] Less is More: Rethinking Few-Shot Learning and Recurrent Neural Nets

arXiv - Machine Learning 4 min read

About this article

Computer Science > Machine Learning

arXiv:2209.14267 (cs) [Submitted on 28 Sep 2022 (v1), last revised 29 Mar 2026 (this version, v3)]

Title: Less is More: Rethinking Few-Shot Learning and Recurrent Neural Nets

Authors: Deborah Pereg, Martin Villiger, Brett Bouma, Polina Golland

Abstract: The statistical supervised learning framework assumes an input-output set with a joint probability distribution that is reliably represented by the training dataset. The learner is then required to output a prediction rule learned from the training dataset's input-output pairs. In this work, we provide meaningful insights into the asymptotic equipartition property (AEP) (Shannon, 1948) in the context of machine learning, and illuminate some of its potential ramifications for few-shot learning. We provide theoretical guarantees for reliable learning under the information-theoretic AEP, and for the generalization error with respect to the sample size. We then focus on a highly efficient recurrent neural net (RNN) framework and propose a reduced-entropy algorithm for few-shot learning. We also propose a mathematical intuition for the RNN as an approximation of a sparse coding solver. We verify the applicability, robustness, and computational efficiency of the proposed approach with image deblurring and optical coherence tomography...
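The abstract's central tool, the asymptotic equipartition property, says that for an i.i.d. source the per-symbol surprise -(1/n) log2 p(X_1, ..., X_n) concentrates around the entropy H(X) as n grows. A minimal numerical sketch of this concentration, using a hypothetical Bernoulli(0.2) source (not taken from the paper), looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.2                                               # P(X = 1)
H = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))      # entropy in bits

def normalized_neg_log_prob(n, trials=2000):
    """Per-symbol surprise -(1/n) log2 p(x^n) over many i.i.d. sequences."""
    x = rng.random((trials, n)) < p                   # trials x n Bernoulli draws
    k = x.sum(axis=1)                                 # ones per sequence
    log_p = k * np.log2(p) + (n - k) * np.log2(1 - p)
    return -log_p / n

# The spread around H shrinks as the sequence length n grows,
# so longer sequences are almost surely "typical".
for n in (10, 100, 10_000):
    vals = normalized_neg_log_prob(n)
    print(n, round(float(vals.mean()), 4), round(float(vals.std()), 4))
```

The shrinking spread is the AEP's typical-set picture: for large n, essentially all probability mass sits on sequences whose empirical surprise is close to H, which is the kind of concentration the paper leverages for its learning guarantees.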
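The abstract's intuition that an RNN approximates a sparse coding solver is commonly made concrete by unrolling ISTA (iterative soft-thresholding): each iteration has the shape of a recurrent cell with shared weights, as in LISTA-style networks. A minimal sketch under that interpretation (the dictionary, sizes, and regularization weight here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def soft_threshold(x, t):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_as_rnn(y, D, lam=0.05, n_steps=500):
    """Solve min_z 0.5||y - Dz||^2 + lam||z||_1 by iterating
    z <- soft_threshold(S z + W y, lam / L), i.e. one recurrent
    step with fixed "recurrent" weight S and "input" weight W."""
    L = np.linalg.norm(D, 2) ** 2                     # Lipschitz const. of the gradient
    W = D.T / L                                       # input weight, shared across steps
    S = np.eye(D.shape[1]) - D.T @ D / L              # recurrent weight, shared across steps
    z = np.zeros(D.shape[1])
    for _ in range(n_steps):
        z = soft_threshold(S @ z + W @ y, lam / L)    # one unrolled RNN step
    return z

# Toy check: recover a 2-sparse code from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 40))
z_true = np.zeros(40)
z_true[[3, 17]] = [1.5, -2.0]
y = D @ z_true
z_hat = ista_as_rnn(y, D)
```

The point of the unrolled view is that the iteration count plays the role of the RNN's depth/time axis, and learning the (W, S) pair from data, rather than deriving them from D, is what turns the fixed solver into a trainable recurrent net.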

Originally published on March 31, 2026. Curated by AI News.

