[2503.03361] Concepts Learned Visually by Infants Can Contribute to Visual Learning and Understanding in AI Models


Computer Science > Artificial Intelligence

arXiv:2503.03361 (cs) [Submitted on 5 Mar 2025 (v1), last revised 25 Mar 2026 (this version, v3)]

Title: Concepts Learned Visually by Infants Can Contribute to Visual Learning and Understanding in AI Models

Authors: Shify Treger, Shimon Ullman

Abstract: Early in development, infants learn to extract surprisingly complex aspects of visual scenes. This early learning comes together with an initial understanding of the extracted concepts, such as their implications, causality, and using them to predict likely future events. In many cases, this learning is obtained with little or no supervision, and from relatively few examples, compared to current network models. Empirical studies of visual perception in early development have shown that in the domain of objects and human-object interactions, early-acquired concepts are often used in the process of learning additional, more complex concepts. In the current work, we model how early-acquired concepts are used in the learning of subsequent concepts, and compare the results with standard deep network modeling. We focused in particular on the use of the concepts of animacy and goal attribution in learning to predict future events in dynamic visual scenes. We show that the use of early concepts in...
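To make the abstract's core idea concrete, here is a minimal toy sketch (not the authors' model) of why an early-acquired concept such as animacy can help predict future events: if a predictor knows which entities are animate and goal-directed, it can select the right motion model for each entity, whereas a concept-free predictor must fall back on a single generic model. All names and dynamics below (`step`, `predict`, the goal-seeking rule) are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(pos, vel, goal, animate):
    """Ground-truth toy dynamics: animate entities move toward their goal;
    inanimate entities continue with constant velocity."""
    return pos + 0.5 * (goal - pos) if animate else pos + vel

def predict(pos, vel, goal, animate, use_concepts):
    """Predictor with vs. without access to the animacy/goal concepts."""
    if use_concepts:
        # The concept lets the predictor choose the correct motion model.
        return step(pos, vel, goal, animate)
    # Without the concept, assume constant velocity for everything.
    return pos + vel

errors = {True: [], False: []}
for _ in range(200):
    pos = rng.normal(size=2)
    vel = rng.normal(size=2)
    goal = rng.normal(size=2)
    animate = rng.random() < 0.5
    truth = step(pos, vel, goal, animate)
    for use_concepts in (True, False):
        pred = predict(pos, vel, goal, animate, use_concepts)
        errors[use_concepts].append(np.linalg.norm(pred - truth))

err_with = float(np.mean(errors[True]))
err_without = float(np.mean(errors[False]))
print(f"mean error with concepts:    {err_with:.3f}")
print(f"mean error without concepts: {err_without:.3f}")
```

In this toy setting the concept-aware predictor is exact by construction; the point is only to illustrate the structure of the argument — an early concept partitions the scene into entities governed by different dynamics, which is information a generic constant-velocity model cannot recover.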

Originally published on March 27, 2026. Curated by AI News.
