[D] Probabilistic Neuron Activation in Predictive Coding Algorithm using 1 Bit LLM Architecture
If we used a predictive coding architecture we wouldn't need backpropagation anymore, which would suit a non-deterministic system that depends on randomness. Since each neuron either activates or doesn't, we could use the 1-bit LLM architecture and control the activations with a calculated chance. With the proper stochastic hardware this would improve efficiency and reduce memory use. Instead of expecting AI to generate a proper output in one attempt, we could make it constantly re-prompt it...
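The idea of neurons firing or not firing with a calculated chance can be sketched as a stochastic binary layer. This is a hypothetical illustration, not anything from the post: `quantize_ternary` follows the BitNet-b1.58 style of rounding weights to {-1, 0, +1}, and the sigmoid of the pre-activation is treated as a Bernoulli firing probability. Names like `stochastic_binary_layer` are made up for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_ternary(w):
    # BitNet-b1.58-style quantization: scale by mean |w|,
    # then round each weight to -1, 0, or +1.
    scale = np.mean(np.abs(w)) + 1e-8
    return np.clip(np.round(w / scale), -1, 1), scale

def stochastic_binary_layer(x, w_q, scale):
    # Pre-activation with ternary weights; the sigmoid of it
    # gives each neuron's probability of firing.
    z = (x @ w_q) * scale
    p = 1.0 / (1.0 + np.exp(-z))
    # Sample: each neuron activates (1) or doesn't (0) with chance p.
    return (rng.random(p.shape) < p).astype(np.float32)

w = rng.normal(size=(8, 4))          # full-precision weights
w_q, s = quantize_ternary(w)         # 1.58-bit ternary weights
x = rng.random((1, 8)).astype(np.float32)
out = stochastic_binary_layer(x, w_q, s)  # binary activation vector
```

On real stochastic hardware the Bernoulli sampling would be a hardware primitive rather than an explicit `rng` call; here it is simulated in NumPy.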