[2602.24281] Memory Caching: RNNs with Growing Memory

arXiv - Machine Learning 4 min read

About this article

Computer Science > Machine Learning — arXiv:2602.24281 (cs) [Submitted on 27 Feb 2026]

Title: Memory Caching: RNNs with Growing Memory

Authors: Ali Behrouz, Zeman Li, Yuan Deng, Peilin Zhong, Meisam Razaviyayn, Vahab Mirrokni

Abstract: Transformers have been established as the de-facto backbones for most recent advances in sequence modeling, mainly due to their growing memory capacity, which scales with context length. While this growing memory benefits retrieval tasks, it incurs quadratic complexity, which has motivated recent studies to explore viable subquadratic recurrent alternatives. Despite showing promising preliminary results in diverse domains, such recurrent architectures underperform Transformers on recall-intensive tasks, a gap often attributed to their fixed-size memory. In this paper, we introduce Memory Caching (MC), a simple yet effective technique that enhances recurrent models by caching checkpoints of their memory states (a.k.a. hidden states). Memory Caching allows the effective memory capacity of RNNs to grow with sequence length, offering a flexible trade-off that interpolates between the fixed memory (i.e., $O(L)$ complexity) of RNNs and the growing memory (i.e., $O(L^2)$ complexity) of Transformers. We propose four variants of MC, including gated aggregation and sparse sele...
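The core idea of the abstract (running a standard RNN while periodically snapshotting its hidden state, so the number of retained memory states grows with sequence length) can be illustrated with a toy sketch. This is not the paper's implementation; the weight matrices, the `cache_every` checkpoint interval, and the function name are all made up for illustration:

```python
import numpy as np

def mc_rnn(inputs, d=8, cache_every=4, seed=0):
    """Run a toy linear-tanh RNN over `inputs`, caching hidden-state
    checkpoints every `cache_every` steps. The cache grows with
    sequence length, giving O(L / cache_every) stored states --
    a rough sketch of the Memory Caching idea, not the paper's method.
    """
    rng = np.random.default_rng(seed)
    W_h = rng.normal(scale=0.1, size=(d, d))  # recurrence weights (illustrative)
    W_x = rng.normal(scale=0.1, size=(d, d))  # input weights (illustrative)
    h = np.zeros(d)
    cache = []                                # checkpointed memory states
    for t, x in enumerate(inputs, start=1):
        h = np.tanh(W_h @ h + W_x @ x)        # standard fixed-size RNN update
        if t % cache_every == 0:
            cache.append(h.copy())            # snapshot the memory state
    return h, cache

# With 16 input steps and cache_every=4, four checkpoints are retained.
inputs = np.random.default_rng(1).normal(size=(16, 8))
h, cache = mc_rnn(inputs, d=8, cache_every=4)
```

Setting `cache_every=1` would retain every state (Transformer-like $O(L^2)$ readout cost if each cached state is attended to), while a very large `cache_every` recovers the fixed-memory RNN; the paper's four MC variants (e.g., gated aggregation, sparse selection) concern how these cached states are aggregated, which this sketch omits.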

Originally published on March 02, 2026. Curated by AI News.


