[2502.10001] EmbBERT: Attention Under 2 MB Memory
Computer Science > Computation and Language
arXiv:2502.10001 (cs)
[Submitted on 14 Feb 2025 (v1), last revised 24 Mar 2026 (this version, v3)]

Title: EmbBERT: Attention Under 2 MB Memory
Authors: Riccardo Bravin, Massimo Pavan, Hazem Hesham Yousef Shalby, Fabrizio Pittorino, Manuel Roveri

Abstract: Transformer architectures based on the attention mechanism have revolutionized natural language processing (NLP), driving major breakthroughs across virtually every NLP task. However, their substantial memory and computational requirements still hinder deployment on ultra-constrained devices such as wearables and Internet-of-Things (IoT) units, where available memory is limited to just a few megabytes. To address this challenge, we introduce EmbBERT, a tiny language model (TLM) architecturally designed for extreme efficiency. The model integrates a compact embedding layer, streamlined feed-forward blocks, and an efficient attention mechanism that together enable optimal performance under strict memory budgets. Through this redesign for the extreme edge, we demonstrate that highly simplified transformer architectures remain remarkably effective under tight resource constraints. EmbBERT requires only 2 MB of total memory, and achieves accuracy comparable to that of state-of-the-art (SotA) models that require a $\mathbf{10\times}$ mem...
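To get a feel for what a 2 MB budget implies for an attention-based model, here is a rough back-of-the-envelope parameter count for a tiny BERT-like stack. All dimensions (vocabulary size, hidden width, layer count) and the 8-bit weight assumption are illustrative choices for this sketch, not the actual EmbBERT configuration reported in the paper.

```python
# Hypothetical memory estimate for a tiny BERT-like model.
# Every dimension below is an assumption for illustration only;
# the real EmbBERT hyperparameters are given in the paper itself.

def param_count(vocab=2048, d_model=64, d_ff=128, n_layers=2, seq_len=64):
    emb = vocab * d_model + seq_len * d_model  # token + position embeddings
    attn = 4 * d_model * d_model               # Q, K, V, and output projections
    ffn = 2 * d_model * d_ff                   # two feed-forward weight matrices
    return emb + n_layers * (attn + ffn)

params = param_count()
bytes_total = params * 1  # assuming 8-bit (1-byte) quantized weights
print(f"{params} params ~= {bytes_total / 2**20:.2f} MB")
```

Even this naive sketch shows that, with a small vocabulary and narrow hidden width, a two-layer attention stack stays well under 2 MB, which is the regime the abstract describes.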