[D] If reasoning requires optimization rather than generation, what does that mean for the scaling paradigm?
Been digging into the architectural differences between autoregressive LLMs and Energy-Based Models (EBMs) for reasoning tasks, especially given LeCun's recent push toward optimization-based architectures. The premise is that true reasoning is an optimization problem (finding a state that minimizes an energy function subject to constraints) rather than next-token prediction. If reasoning inherently requires this optimization loop, does brute-force scaling of autoregressive models hit a fundamental ceiling?
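To make the contrast concrete, here's a minimal toy sketch of what "inference as optimization" means in the EBM framing: instead of emitting an answer in one forward pass per token, the model spends an inner loop of compute descending an energy landscape until it finds a state consistent with the constraints. Everything here is illustrative, not any specific published method: `energy` is a stand-in for a learned E(context, answer), and `reason` is just plain gradient descent.

```python
import torch

def energy(context: torch.Tensor, answer: torch.Tensor) -> torch.Tensor:
    # Toy energy: low when `answer` is consistent with `context`.
    # A real EBM would parameterize this with a neural network.
    return ((answer - context.mean()) ** 2).sum()

def reason(context: torch.Tensor, steps: int = 100, lr: float = 0.1) -> torch.Tensor:
    # Inference = inner-loop optimization: descend the energy surface
    # to find an answer that satisfies the constraints the energy encodes.
    answer = torch.zeros(4, requires_grad=True)
    opt = torch.optim.SGD([answer], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        e = energy(context, answer)
        e.backward()
        opt.step()
    return answer.detach()

context = torch.randn(8)
print(reason(context))  # converges toward context.mean() for this toy energy
```

The key structural difference this is meant to highlight: an autoregressive LLM commits fixed compute per token and can't revisit earlier commitments, while the EBM-style loop can spend variable compute per query and keeps refining a full candidate answer until the energy (i.e., constraint violation) is low.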