[R] First open-source implementation of Hebbian fast-weight write-back for the BDH architecture
The BDH (Dragon Hatchling) paper (arXiv:2509.26507) describes a Hebbian synaptic plasticity mechanism in which model weights update during inference. The released code computes the co-activation product and then discards it; the write-back was never implemented publicly. I implemented it: the model rewrites its own decoder weights during inference, using sparse activation codes as addresses. The same token always produces the same code regardless of position. Consolidation (v2): once episodic fast weights...
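To make the mechanism concrete, here is a minimal sketch of the idea as I understand it, not the actual BDH implementation: a token deterministically maps to a k-sparse code (so the same token yields the same code at any position), and the Hebbian co-activation outer product is written back into the decoder weights instead of being discarded. All names (`sparse_code`, `hebbian_write_back`) and the dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_neurons = 8, 32
W_dec = rng.normal(scale=0.1, size=(d_model, n_neurons))  # decoder fast weights

def sparse_code(token_id, k=4):
    """Hypothetical deterministic k-sparse binary code for a token.
    Seeded by token id only, so the code is position-independent."""
    r = np.random.default_rng(token_id)
    code = np.zeros(n_neurons)
    code[r.choice(n_neurons, size=k, replace=False)] = 1.0
    return code

def hebbian_write_back(W_dec, code, post, eta=0.05):
    """Write the Hebbian co-activation product back into the decoder.
    Because `code` is sparse, only the addressed columns are touched."""
    W_dec += eta * np.outer(post, code)
    return W_dec

x = sparse_code(token_id=42)
post = W_dec @ x                      # post-synaptic activation for this token
before = W_dec.copy()
hebbian_write_back(W_dec, x, post)
changed_cols = np.any(W_dec != before, axis=0)
print(int(changed_cols.sum()))        # only the k addressed columns moved
```

The sparse code acting as an "address" is what keeps the update local: the outer product is zero everywhere except in the columns selected by the code.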