[2604.03190] Gradient Boosting within a Single Attention Layer



Computer Science > Machine Learning
arXiv:2604.03190 (cs)
[Submitted on 3 Apr 2026]

Title: Gradient Boosting within a Single Attention Layer
Authors: Saleh Sargolzaei

Abstract: Transformer attention computes a single softmax-weighted average over values -- a one-pass estimate that cannot correct its own errors. We introduce \emph{gradient-boosted attention}, which applies the principle of gradient boosting \emph{within} a single attention layer: a second attention pass, with its own learned projections, attends to the prediction error of the first and applies a gated correction. Under a squared reconstruction objective, the construction maps onto Friedman's gradient boosting machine, with each attention pass as a base learner and the per-dimension gate as the shrinkage parameter. We show that a single Hopfield-style update erases all query information orthogonal to the stored-pattern subspace, and that further iteration under local contraction can collapse distinct queries in the same region to the same fixed point. We also show that separate projections for the correction pass can recover residual information inaccessible to the shared-projection approach of Tukey's twicing. On a 10M-token subset of WikiText-103, gradient-boosted attention achieves a test perplexity of $67.9$, compared to $72.2$ for standard attention and $69.6$ for Twicing Attention...
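
The construction in the abstract is concrete enough to sketch. Below is a minimal, hedged reading of it in PyTorch: a first softmax-attention pass, a second pass with its own query/key projections that attends to the first pass's residual, and a learned per-dimension gate playing the role of the shrinkage parameter. This is an illustration only, not the paper's code; the choice of the value sequence as the reconstruction target, the sigmoid gate, and all names (GradientBoostedAttention, gate_logits) are assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradientBoostedAttention(nn.Module):
    """Illustrative sketch of two-pass, gradient-boosted attention.

    Assumption: the first pass reconstructs the value sequence, so its
    prediction error is V - Attn1 @ V; the second pass attends to that
    residual with separate projections and adds a gated correction.
    """

    def __init__(self, d_model: int):
        super().__init__()
        # Pass 1: standard learned projections.
        self.q1 = nn.Linear(d_model, d_model, bias=False)
        self.k1 = nn.Linear(d_model, d_model, bias=False)
        self.v1 = nn.Linear(d_model, d_model, bias=False)
        # Pass 2: separate projections, as the abstract emphasizes
        # (Tukey's twicing would instead reuse the pass-1 projections).
        self.q2 = nn.Linear(d_model, d_model, bias=False)
        self.k2 = nn.Linear(d_model, d_model, bias=False)
        # Per-dimension gate: the shrinkage parameter in the boosting view.
        self.gate_logits = nn.Parameter(torch.zeros(d_model))
        self.scale = 1.0 / math.sqrt(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        v = self.v1(x)
        # Pass 1: a single softmax-weighted average over values.
        attn1 = F.softmax(self.q1(x) @ self.k1(x).transpose(-2, -1) * self.scale, dim=-1)
        y1 = attn1 @ v
        # Residual of pass 1 under a squared reconstruction objective.
        r = v - y1
        # Pass 2: attend to the residual with its own projections.
        attn2 = F.softmax(self.q2(x) @ self.k2(x).transpose(-2, -1) * self.scale, dim=-1)
        correction = attn2 @ r
        # Gated correction, one shrinkage factor per dimension.
        g = torch.sigmoid(self.gate_logits)
        return y1 + g * correction
```

In the boosting analogy, pass 1 is the first base learner, the residual r is the negative gradient of the squared loss, and pass 2 is a second base learner fit to that residual; the gate g shrinks its contribution much as Friedman's shrinkage parameter would.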

Originally published on April 06, 2026. Curated by AI News.

