The Scaling Bandaid is Wearing Thin (And Nobody Wants to Admit It)
Let me be direct: we’ve hit a wall with scaling, and the entire field is kind of bullshitting about what comes next. I’ve spent enough time in research circles to know this isn’t controversial; people just don’t say it publicly because there’s too much money involved.

Here’s the thing. Every major lab is operating under the same assumption: if we just throw enough compute at the problem, language models will eventually think. GPT-4 → GPT-5. Claude 3 → Claude 4. Llama keeps getting bigger. And...