[2602.14495] Divine Benevolence is an $x^2$: GLUs scale asymptotically faster than MLPs

arXiv - Machine Learning

Summary

This paper compares the scaling laws of Gated Linear Units (GLUs) with those of Multi-Layer Perceptrons (MLPs), demonstrating that GLUs scale asymptotically faster because their piecewise-quadratic functional forms achieve a higher order of approximation.
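The quadratic form can be seen directly in a GLU block's algebra. Below is a minimal NumPy sketch (an illustration, not the paper's code) contrasting a bias-free ReLU MLP block with a ReGLU-style gated block: scaling the input by t > 0 scales the MLP output by t (piecewise linear) but the GLU output by t² (piecewise quadratic), which is the $x^2$ in the title.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def mlp_block(x, W1, W2):
    # Bias-free ReLU MLP: piecewise-LINEAR in x.
    return relu(x @ W1) @ W2

def glu_block(x, W_gate, W_up, W_down):
    # ReGLU-style gated block (an illustrative GLU variant): the
    # elementwise product of two affine-in-x maps introduces x^2
    # cross terms, so the output is piecewise-QUADRATIC in x.
    return (relu(x @ W_gate) * (x @ W_up)) @ W_down

d, h = 4, 8
x = rng.normal(size=(1, d))
W1, W2 = rng.normal(size=(d, h)), rng.normal(size=(h, d))
Wg, Wu, Wd = (rng.normal(size=(d, h)), rng.normal(size=(d, h)),
              rng.normal(size=(h, d)))

# Scaling the input by t > 0 preserves every ReLU sign pattern, so
# mlp_block is degree-1 homogeneous and glu_block is degree-2.
t = 2.0
print(np.allclose(mlp_block(t * x, W1, W2),
                  t * mlp_block(x, W1, W2)))          # True
print(np.allclose(glu_block(t * x, Wg, Wu, Wd),
                  t**2 * glu_block(x, Wg, Wu, Wd)))   # True
```

The degree-2 homogeneity within each activation region is what lets a GLU reproduce local quadratic structure that an MLP can only approximate with extra pieces.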

Why It Matters

Understanding the scaling behavior of different neural network architectures is crucial for advancing machine learning models. This research provides insights into the design of more efficient models, potentially impacting various applications in AI and machine learning.

Key Takeaways

  • GLUs exhibit asymptotically faster loss scaling than MLPs: $L(P)\propto P^{-3}$ versus $L(P)\propto P^{-2}$ on function reconstruction problems.
  • The paper introduces the "Gated Quadratic Unit," which has an even steeper $L(P)$ slope than either the GLU or the MLP.
  • Tools from numerical analysis and function approximation theory are applied to explain model architecture choices.
  • Empirical results on 1D function approximation verify the theoretical scaling slopes.
  • The research opens avenues for designing better large models from first principles.
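The slope gap mirrors a classical result from approximation theory: piecewise-linear approximants converge at second order in the cell width, piecewise-quadratic ones at third order. The sketch below (our illustration of the textbook result, not the paper's experiment) fits sin(x) on [0, 1] with both and shows the max error shrinking roughly 4x vs 8x per grid doubling:

```python
import numpy as np

def interp_error(f, n, degree):
    # Max-norm error of a piecewise polynomial interpolant of f
    # on [0, 1] with n cells, sampled on a dense grid.
    xs = np.linspace(0.0, 1.0, 2001)
    edges = np.linspace(0.0, 1.0, n + 1)
    if degree == 1:
        approx = np.interp(xs, edges, f(edges))  # piecewise linear
    else:
        # Piecewise quadratic: exact Lagrange fit through each cell's
        # endpoints and midpoint (np.polyfit with 3 points, degree 2).
        approx = np.empty_like(xs)
        for i in range(n):
            a, b = edges[i], edges[i + 1]
            pts = np.array([a, (a + b) / 2.0, b])
            coef = np.polyfit(pts, f(pts), 2)
            mask = (xs >= a) & (xs <= b)
            approx[mask] = np.polyval(coef, xs[mask])
    return np.max(np.abs(approx - f(xs)))

f = np.sin
for n in (8, 16, 32):
    print(n, interp_error(f, n, 1), interp_error(f, n, 2))
```

Halving the cell width divides the piecewise-linear error by about 2² = 4 and the piecewise-quadratic error by about 2³ = 8, the same 2-versus-3 exponent gap the paper reports for MLP versus GLU parameter scaling.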

Computer Science > Machine Learning
arXiv:2602.14495 (cs) [Submitted on 16 Feb 2026]

Title: Divine Benevolence is an $x^2$: GLUs scale asymptotically faster than MLPs
Authors: Alejandro Francisco Queiruga

Abstract: Scaling laws can be understood from ground-up numerical analysis, where traditional function approximation theory can explain shifts in model architecture choices. GLU variants now dominate frontier LLMs, and similar outer-product architectures are prevalent in ranking models. The success of these architectures has mostly been left as an empirical discovery. In this paper, we apply the tools of numerical analysis to expose a key factor: these models have an $x^2$ which enables \emph{asymptotically} faster scaling than MLPs. GLUs have piecewise quadratic functional forms that are sufficient to exhibit quadratic order of approximation. Our key contribution is to demonstrate that the $L(P)$ scaling slope is $L(P)\propto P^{-3}$ for GLUs but only $L(P)\propto P^{-2}$ for MLPs on function reconstruction problems. We provide a parameter construction and empirical verification of these slopes for 1D function approximation. From the first principles we discover, we make one stride and propose the ``Gated Quadratic Unit'' which has an even steeper $L(P)$ slope than the GLU and MLP. This opens the possibility of archi...
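To read the reported slopes concretely: under $L(P)\propto P^{-\alpha}$, growing the parameter count by a factor $k$ multiplies the loss by $k^{-\alpha}$. A quick illustration of the arithmetic:

```python
def loss_multiplier(slope, factor=2.0):
    # Loss multiplier when parameter count grows by `factor`,
    # given a power law L(P) proportional to P**(-slope).
    return factor ** (-slope)

print(loss_multiplier(3))  # GLU slope: 0.125 -> loss falls 8x per doubling
print(loss_multiplier(2))  # MLP slope: 0.25  -> loss falls 4x per doubling
```

So on the paper's 1D reconstruction problems, each doubling of parameters buys a GLU roughly twice the loss reduction (8x vs 4x) that it buys an MLP, and the gap widens with scale.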


