[2503.23608] Autonomous Learning with High-Dimensional Computing Architecture Similar to von Neumann's
Summary
This paper explores a high-dimensional computing architecture that mimics biological learning processes, proposing a model that integrates concepts from psychology, biology, and traditional computing.
Why It Matters
Understanding autonomous learning through a biologically inspired computing architecture could reshape AI development, particularly by enabling more efficient learning systems that operate more like human cognition. This research bridges the gap between neuroscience and machine learning, paving the way for advances in robotics and AI applications.
Key Takeaways
- The proposed architecture uses high-dimensional vectors (H = 10,000, for example) to model learning akin to human and animal cognition; a minimal sketch follows this list.
- It emphasizes the importance of integrating psychological and biological principles into computing theories.
- The architecture aims for energy efficiency comparable to biological systems, potentially impacting future AI designs.
- Applications in robotics and language processing are anticipated, highlighting the versatility of the proposed model.
- The research calls for large-scale experiments to validate the theoretical framework.
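To make the first takeaway concrete, here is a minimal sketch of computing with high-dimensional vectors in the style the abstract describes. It assumes random bipolar vectors and the standard hyperdimensional operations of bundling (superposition), binding, and similarity; the names (`random_hv`, `bundle`, `bind`, `sim`) and the toy color/shape record are illustrative, not taken from the paper.

```python
import numpy as np

H = 10_000  # dimensionality; the paper uses H = 10,000 as its example
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector; any two are nearly orthogonal."""
    return rng.choice([-1, 1], size=H)

def bundle(*vs):
    """Superposition: elementwise majority keeps each input recognizable."""
    return np.sign(np.sum(vs, axis=0))

def bind(a, b):
    """Binding: elementwise product pairs two vectors; it is its own inverse."""
    return a * b

def sim(a, b):
    """Normalized dot product: ~0 for unrelated vectors, 1 for identical."""
    return float(a @ b) / H

# Encode a tiny record {color: red, shape: round} as a single vector.
color, shape = random_hv(), random_hv()
red, round_ = random_hv(), random_hv()
record = bundle(bind(color, red), bind(shape, round_))

# Unbinding the 'color' role from the record recovers something close to 'red'.
print(sim(bind(record, color), red))     # high (about 0.5 here)
print(sim(bind(record, color), round_))  # near 0
```

Because random vectors in 10,000 dimensions are almost certainly nearly orthogonal, many bound pairs can be superposed in one vector and still be told apart, which is what lets instructions operate on vectors "in superposition".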
Computer Science > Machine Learning
arXiv:2503.23608 (cs)
[Submitted on 30 Mar 2025 (v1), last revised 21 Feb 2026 (this version, v2)]
Title: Autonomous Learning with High-Dimensional Computing Architecture Similar to von Neumann's
Authors: Pentti Kanerva
Abstract: We model human and animal learning by computing with high-dimensional vectors (H = 10,000 for example). The architecture resembles traditional (von Neumann) computing with numbers, but the instructions refer to vectors and operate on them in superposition. The architecture includes a high-capacity memory for vectors, analogue of the random-access memory (RAM) for numbers. The model's ability to learn from data reminds us of deep learning, but with an architecture closer to biology. The architecture agrees with an idea from psychology that human memory and learning involve a short-term working memory and a long-term data store. Neuroscience provides us with a model of the long-term memory, namely, the cortex of the cerebellum. With roots in psychology, biology, and traditional computing, a theory of computing with vectors can help us understand how brains compute. Application to learning by robots seems inevitable, but there is likely to be more, including language. Ultimately we want to compute with no more material and energy than used by brains. T...
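The abstract's "high-capacity memory for vectors, analogue of the random-access memory (RAM)" suggests a store addressed by content rather than by location. The following is a hypothetical sketch of such an item memory, assuming simple nearest-neighbor retrieval by dot product; the paper's actual long-term memory is modeled on the cortex of the cerebellum (in the spirit of Kanerva's sparse distributed memory), which this toy class does not attempt to reproduce.

```python
import numpy as np

H = 10_000
rng = np.random.default_rng(1)

class VectorMemory:
    """Minimal associative store: recall by similarity, not by address."""

    def __init__(self):
        self.labels, self.vectors = [], []

    def store(self, label, v):
        self.labels.append(label)
        self.vectors.append(v)

    def recall(self, query):
        """Return the label of the stored vector most similar to the query."""
        M = np.stack(self.vectors)  # one row per stored vector
        return self.labels[int(np.argmax(M @ query))]

mem = VectorMemory()
items = {name: rng.choice([-1, 1], size=H) for name in ["cat", "dog", "bird"]}
for name, v in items.items():
    mem.store(name, v)

# Corrupt 30% of the components: recall still succeeds, because in high
# dimensions even a badly degraded vector stays closest to its original.
noisy = items["dog"].copy()
flipped = rng.choice(H, size=3_000, replace=False)
noisy[flipped] *= -1
print(mem.recall(noisy))  # 'dog'
```

Graceful degradation under noise is characteristic of such vector memories, and of biological memory alike.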