Associative memory system for LLMs that learns during inference [P]
I've been working on MDA (Modular Dynamic Architecture), an online associative memory system for LLMs. Here's what I learned building it.

The problem I was trying to solve

RAG can't learn mid-conversation: if you introduce a new fact after indexing, it's invisible to retrieval. I wanted a system that could learn during inference without retraining.

How MDA works

Every concept becomes an Entity with a 256-dim identity vector. Entities are connected through a sparse synapse graph. New knowledge...
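To make the entity/synapse idea concrete, here is a minimal sketch of what such a structure could look like. This is my own illustration, not the actual MDA code: the class name `Entity`, the random unit-vector initialization, and the dict-based edge storage are all assumptions; the post only states that each concept gets a 256-dim identity vector and that entities sit in a sparse synapse graph.

```python
import numpy as np

class Entity:
    """Hypothetical sketch of an MDA-style entity: a concept with a
    256-dim identity vector and sparse weighted connections."""

    def __init__(self, name, dim=256):
        self.name = name
        # Identity vector; random unit-norm init is an assumption here.
        self.identity = np.random.randn(dim).astype(np.float32)
        self.identity /= np.linalg.norm(self.identity)
        # Sparse synapse graph: only store edges that exist
        # (neighbor name -> weight), rather than a dense adjacency matrix.
        self.synapses = {}

    def connect(self, other, weight=1.0):
        # Add an undirected weighted edge between two entities.
        self.synapses[other.name] = weight
        other.synapses[self.name] = weight

# Usage: link two concepts at inference time, no retraining involved.
cat = Entity("cat")
mammal = Entity("mammal")
cat.connect(mammal, weight=0.8)
```

Storing edges in a per-entity dict keeps the graph sparse: adding a fact is just inserting an edge, which is what lets this kind of structure absorb new knowledge mid-conversation.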