[2503.03361] Concepts Learned Visually by Infants Can Contribute to Visual Learning and Understanding in AI Models
Computer Science > Artificial Intelligence
arXiv:2503.03361 (cs)
[Submitted on 5 Mar 2025 (v1), last revised 25 Mar 2026 (this version, v3)]

Title: Concepts Learned Visually by Infants Can Contribute to Visual Learning and Understanding in AI Models
Authors: Shify Treger, Shimon Ullman

Abstract: Early in development, infants learn to extract surprisingly complex aspects of visual scenes. This early learning comes together with an initial understanding of the extracted concepts, such as their implications, their causality, and their use in predicting likely future events. In many cases, this learning is obtained with little or no supervision, and from relatively few examples, compared with current network models. Empirical studies of visual perception in early development have shown that in the domain of objects and human-object interactions, early-acquired concepts are often used in the process of learning additional, more complex concepts. In the current work, we model how early-acquired concepts are used in the learning of subsequent concepts, and compare the results with standard deep network modeling. We focus in particular on the use of the concepts of animacy and goal attribution in learning to predict future events in dynamic visual scenes. We show that the use of early concepts in...
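The abstract contrasts standard end-to-end deep network modeling with models that also receive early-acquired concepts (animacy, goal attribution) when learning to predict future events in dynamic scenes. As a rough illustration of that comparison only, a minimal PyTorch sketch is given below; the paper's actual architectures, data, and concept representations are not described here, so every class name, dimension, and feature in the sketch is an assumption.

```python
# Minimal sketch (assumptions only): a baseline next-frame predictor versus one
# that also receives early-acquired concept features (e.g. animacy, goal
# attribution). Nothing here reproduces the paper's actual models or data.
import torch
import torch.nn as nn

SCENE_DIM = 128    # hypothetical per-frame scene embedding size
CONCEPT_DIM = 4    # hypothetical early-concept features: animacy flag,
                   # goal-attribution scores for agents in the scene, etc.
HIDDEN = 256

class BaselinePredictor(nn.Module):
    """Standard deep-network baseline: predicts the next-frame embedding
    from the current frame embedding alone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SCENE_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, SCENE_DIM),
        )

    def forward(self, scene):
        return self.net(scene)

class ConceptAugmentedPredictor(nn.Module):
    """Same predictor, but conditioned on early-acquired concept features
    concatenated with the scene embedding."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SCENE_DIM + CONCEPT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, SCENE_DIM),
        )

    def forward(self, scene, concepts):
        return self.net(torch.cat([scene, concepts], dim=-1))

if __name__ == "__main__":
    # Random placeholder tensors standing in for embeddings of dynamic scenes.
    scene_t = torch.randn(8, SCENE_DIM)     # current-frame embeddings
    scene_t1 = torch.randn(8, SCENE_DIM)    # target next-frame embeddings
    concepts = torch.randn(8, CONCEPT_DIM)  # placeholder concept annotations

    loss_fn = nn.MSELoss()
    baseline = BaselinePredictor()
    augmented = ConceptAugmentedPredictor()

    # One gradient step for each model, just to show the training setup.
    for model, inputs in [(baseline, (scene_t,)),
                          (augmented, (scene_t, concepts))]:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        pred = model(*inputs)
        loss = loss_fn(pred, scene_t1)
        loss.backward()
        opt.step()
        print(type(model).__name__, float(loss))
```

In a sketch of this kind, the only difference between the two models is whether precomputed concept features are available as input, so any gap in prediction quality on the same scenes can be attributed to those features rather than to architecture or capacity.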