[2603.25157] Vision Hopfield Memory Networks
Computer Science > Machine Learning
arXiv:2603.25157 (cs)
[Submitted on 26 Mar 2026]

Title: Vision Hopfield Memory Networks
Authors: Jianfeng Wang, Amine M'Charrak, Luk Koska, Xiangtao Wang, Daniel Petriceanu, Mykyta Smyrnov, Ruizhi Wang, Michael Bumbar, Luca Pinchetti, Thomas Lukasiewicz

Abstract: Recent vision and multimodal foundation backbones, such as Transformer families and state-space models like Mamba, have achieved remarkable progress, enabling unified modeling across images, text, and beyond. Despite their empirical success, these architectures remain far from the computational principles of the human brain, often demanding enormous amounts of training data while offering limited interpretability. In this work, we propose the Vision Hopfield Memory Network (V-HMN), a brain-inspired foundation backbone that integrates hierarchical memory mechanisms with iterative refinement updates. Specifically, V-HMN incorporates local Hopfield modules that provide associative memory dynamics at the image patch level, global Hopfield modules that function as episodic memory for contextual modulation, and a predictive-coding-inspired refinement rule for iterative error correction. By organizing these memory-based modules hierarchically, V-HMN captures both local and global dynamics in a unified framework. Memory retrieval exposes the relationship between...
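The abstract does not give the paper's equations, but the components it names have standard formulations. The following is a minimal Python sketch, assuming the softmax-based modern Hopfield retrieval update (in the style of Ramsauer et al., 2020) for the local and global memory modules, and an illustrative prediction-error loop for the predictive-coding-inspired refinement. The function names (hopfield_retrieve, refine), the step size eta, the inverse temperature beta, and the mean-pooled global context are all hypothetical choices, not the paper's actual method.

import numpy as np

def hopfield_retrieve(memory, query, beta=1.0):
    """One modern-Hopfield retrieval step: softmax-weighted readout
    of stored patterns (standard formulation; assumed here)."""
    # memory: (num_patterns, dim); query: (dim,)
    scores = beta * memory @ query           # similarity to each stored pattern
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory                  # convex combination of patterns

def refine(patches, local_mem, global_mem, steps=3, eta=0.5, beta=1.0):
    """Hypothetical iterative refinement: each patch is nudged toward
    what local (patch-level) and global (context-level) memories
    retrieve, via a prediction-error update in the spirit of
    predictive coding. Not the paper's actual update rule."""
    x = patches.copy()                       # (num_patches, dim)
    for _ in range(steps):
        for i in range(len(x)):
            local_pred = hopfield_retrieve(local_mem, x[i], beta)
            context = x.mean(axis=0)         # crude global context summary (assumption)
            global_pred = hopfield_retrieve(global_mem, context, beta)
            error = 0.5 * (local_pred + global_pred) - x[i]  # prediction error
            x[i] += eta * error              # iterative error correction
    return x

# Toy usage with random memories and patches.
rng = np.random.default_rng(0)
local_mem = rng.standard_normal((8, 16))     # patch-level associative memory
global_mem = rng.standard_normal((4, 16))    # episodic/contextual memory
patches = rng.standard_normal((5, 16))       # image patch embeddings
refined = refine(patches, local_mem, global_mem)
print(refined.shape)  # (5, 16)

The sketch mirrors the hierarchy described in the abstract: a local retrieval per patch, a global retrieval conditioned on a pooled context, and repeated error-correction updates combining the two.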