[2508.10765] Memorisation and forgetting in a learning Hopfield neural network: bifurcation mechanisms, attractors and basins
Summary
This article summarises a study of memorisation and forgetting in Hopfield neural networks, showing how bifurcations create and destroy memory attractors during learning.
Why It Matters
Understanding how Hopfield neural networks manage memory formation and catastrophic forgetting is crucial for improving artificial intelligence systems. This research provides insights that could lead to more robust neural networks by mitigating catastrophic forgetting and the emergence of spurious memories.
Key Takeaways
- Hopfield networks utilize bifurcation mechanisms to form and destroy memory attractors.
- The study reveals a connection between memory formation and catastrophic forgetting in neural networks.
- Newly learned categories are represented by the basins of newly formed attractors.
- The research offers a universal strategy applicable to various recurrent neural networks.
- Insights from this study could help develop methods to reduce memory-related flaws in AI.
Mathematics > Dynamical Systems
arXiv:2508.10765 (math)
[Submitted on 14 Aug 2025 (v1), last revised 15 Feb 2026 (this version, v2)]
Title: Memorisation and forgetting in a learning Hopfield neural network: bifurcation mechanisms, attractors and basins
Authors: Adam E. Essex, Natalia B. Janson, Rachel A. Norris, Alexander G. Balanov (Loughborough University, England)
Abstract: Despite explosive expansion of artificial intelligence based on artificial neural networks (ANNs), these are employed as "black boxes", as it is unclear how, during learning, they form memories or develop unwanted features, including spurious memories and catastrophic forgetting. Much research is available on isolated aspects of learning ANNs, but due to their high dimensionality and non-linearity, their comprehensive analysis remains a challenge. In ANNs, knowledge is thought to reside in connection weights or in attractor basins, but these two paradigms are not linked explicitly. Here we comprehensively analyse mechanisms of memory formation in an 81-neuron Hopfield network undergoing Hebbian learning by revealing bifurcations leading to formation and destruction of attractors and their basin boundaries. We show that, by affecting evolution of connection weights, the ap...
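The abstract describes an 81-neuron Hopfield network whose Hebbian weights turn stored patterns into attractors. A minimal sketch of that idea follows; the random pattern, the noise level, and the synchronous update schedule are illustrative assumptions, not the authors' exact protocol:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 81  # same network size as in the paper

# One stored memory: a random vector of +/-1 neuron states.
pattern = rng.choice([-1, 1], size=N)

# Hebbian (outer-product) weights with zero self-coupling.
W = np.outer(pattern, pattern).astype(float) / N
np.fill_diagonal(W, 0.0)

def recall(state, steps=10):
    """Synchronous sign updates; the stored pattern is a fixed-point attractor."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Start inside the attractor's basin: flip 12 of the 81 bits.
noisy = pattern.copy()
flip = rng.choice(N, size=12, replace=False)
noisy[flip] *= -1

recovered = recall(noisy)
print(np.array_equal(recovered, pattern))  # → True: the state falls into the stored attractor
```

The corrupted state still overlaps the memory (57 of 81 bits agree), so each update pulls it toward the stored pattern; states outside the basin boundary would instead converge to a different attractor, which is exactly the bifurcation-and-basin structure the paper analyses.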