[2601.21708] FBS: Modeling Native Parallel Reading inside a Transformer

arXiv - AI 3 min read

About this article

Computer Science > Artificial Intelligence · arXiv:2601.21708 (cs)
[Submitted on 29 Jan 2026 (v1), last revised 8 Apr 2026 (this version, v2)]

Title: FBS: Modeling Native Parallel Reading inside a Transformer
Authors: Tongxi Wang

Abstract: Large language models (LLMs) excel across many tasks, yet inference is still dominated by strictly token-by-token autoregression. Existing acceleration methods largely patch this pipeline and miss core ingredients of human reading: content-adaptive foresight, chunk-structure-aware compute allocation, and train-test consistency for preview and skimming. We propose the Fovea-Block-Skip Transformer (FBS), which injects a causal, trainable loop into Transformers via a Parafovea-Attention Window (PAW), a Chunk-Head (CH), and a Skip-Gate (SG). Across diverse benchmarks, FBS improves the quality-efficiency trade-off without increasing parameters, and ablations show the three modules are complementary.

Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
Cite as: arXiv:2601.21708 [cs.AI] (or arXiv:2601.21708v2 [cs.AI] for this version), https://doi.org/10.48550/arXiv.2601.21708

Submission history (from Tongxi Wang):
[v1] Thu, 29 Jan 2026 13:39:55 UTC (28,828 KB)
[v2] Wed, 8 Apr 2026 10:39:08 UTC (28,831 KB)
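To make the abstract's reading metaphor concrete, here is a toy sketch of two of the ideas it names. Nothing below comes from the paper: the function names `parafovea_causal_mask` and `skip_gate` are hypothetical, and the actual PAW and SG designs in FBS are almost certainly different. This only illustrates what a bounded look-ahead attention mask ("preview") and a per-chunk skip decision ("skimming") could look like in principle.

```python
import numpy as np

def parafovea_causal_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean attention mask where each position sees all past tokens
    plus a small preview of up to `window` upcoming tokens.
    (A guess at the flavor of a parafovea-attention window, not the
    paper's actual PAW construction.)"""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        mask[i, : min(seq_len, i + 1 + window)] = True
    return mask

def skip_gate(chunk_reprs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Toy skip gate: score each chunk summary vector with a sigmoid of
    its mean and keep only chunks that clear the threshold; the rest
    would be skimmed (given reduced compute) by later layers."""
    scores = 1.0 / (1.0 + np.exp(-chunk_reprs.mean(axis=-1)))
    return scores >= threshold
```

In this sketch the mask would gate attention scores during prefill, and the skip decision would route low-scoring chunks around the heavier per-token computation; how FBS actually trains these gates for train-test consistency is described in the paper itself.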

Originally published on April 09, 2026. Curated by AI News.

Related Articles

Llms

Diffusion for generating/editing ASTs? [D]

I’m not a machine learning expert or anything, but I do enjoy learning about how it all works. I’ve noticed that one of the main limitati...

Reddit - Machine Learning · 1 min ·
Llms

ChatGPT’s ‘Trusted Contact’ will alert loved ones of safety concerns | The Verge

OpenAI is launching an optional safety feature for ChatGPT that allows adult users to assign an emergency contact for mental health and s...

The Verge - AI · 4 min ·
Llms

AI is helpful but still not “there” yet

what I mean is that every time I use Claude, or Grok or any of the AI platforms and tools, I realize how far this technology is from repl...

Reddit - Artificial Intelligence · 1 min ·
Llms

ChatGPT Has 'Goblin' Mania in the US. In China It Will 'Catch You Steadily' | WIRED

OpenAI's chatbot has some weird linguistic tics in Chinese that are driving users crazy.

Wired - AI · 8 min ·

