[2602.15379] FlashMem: Supporting Modern DNN Workloads on Mobile with GPU Memory Hierarchy Optimizations

arXiv · Machine Learning

Summary

The paper presents FlashMem, a memory streaming framework designed to optimize the execution of large-scale deep neural networks (DNNs) on mobile GPUs, significantly improving memory efficiency and inference speed.

Why It Matters

As mobile applications increasingly rely on deep learning, optimizing DNN workloads for on-device execution is crucial for performance and user experience. FlashMem addresses the limitations of existing preloading-based frameworks by enabling efficient execution of large models, which is vital for advancing mobile AI capabilities.

Key Takeaways

  • FlashMem offers a novel memory streaming approach for DNNs on mobile GPUs.
  • The framework achieves significant memory reduction (2.0x to 8.4x) and speedup (1.7x to 75.0x) compared to weight-preloading baselines.
  • It dynamically streams model weights on demand, enhancing execution efficiency.
  • FlashMem supports multi-DNN workloads, making it suitable for complex applications.
  • The research highlights the importance of optimizing resource usage in mobile AI applications.

Computer Science > Distributed, Parallel, and Cluster Computing
arXiv:2602.15379 (cs) · Submitted on 17 Feb 2026

Title: FlashMem: Supporting Modern DNN Workloads on Mobile with GPU Memory Hierarchy Optimizations
Authors: Zhihao Shu, Md Musfiqur Rahman Sanim, Hangyu Zheng, Kunxiong Zhu, Miao Yin, Gagan Agrawal, Wei Niu

Abstract: The increasing size and complexity of modern deep neural networks (DNNs) pose significant challenges for on-device inference on mobile GPUs with limited memory and computational resources. Existing DNN acceleration frameworks primarily deploy a weight preloading strategy, in which all model parameters are loaded into memory before execution on mobile GPUs. We posit that this approach is not adequate for modern DNN workloads that comprise very large model(s) and possibly the execution of several distinct models in succession. In this work, we introduce FlashMem, a memory streaming framework designed to efficiently execute large-scale modern DNNs and multi-DNN workloads while minimizing memory consumption and reducing inference latency. Instead of fully preloading weights, FlashMem statically determines model loading schedules and dynamically streams weights on demand, leveraging 2.5D texture memory to minimize data transformations and improve execution efficiency. Experimental results on 11 models ...
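The core idea in the abstract, precomputing a loading schedule and streaming weights on demand rather than preloading everything, can be illustrated with a minimal sketch. Everything below (WeightStore, the layer names, the budget parameter, and the LRU eviction policy) is a hypothetical stand-in for illustration, not FlashMem's actual API or scheduling algorithm.

```python
import threading
from collections import OrderedDict

class WeightStore:
    """Simulated backing store (e.g., flash) keyed by layer name."""
    def __init__(self, weights):
        self._disk = dict(weights)

    def load(self, layer):
        # A real implementation would copy from storage into GPU memory
        # (FlashMem additionally lays weights out in 2.5D texture memory).
        return self._disk[layer]

def run_streamed(layers, store, budget=2):
    """Run `layers` in order, keeping at most `budget` weight tensors
    resident and prefetching the next layer's weights in the background
    while the current layer executes (double buffering)."""
    resident = OrderedDict()  # resident weights, oldest first
    outputs = []
    for i, layer in enumerate(layers):
        if layer not in resident:
            resident[layer] = store.load(layer)   # demand load
        resident.move_to_end(layer)
        weights = resident[layer]                 # pin before any eviction

        def prefetch(nxt):
            resident[nxt] = store.load(nxt)
            while len(resident) > budget:         # evict least recently used
                resident.popitem(last=False)

        worker = None
        if i + 1 < len(layers) and layers[i + 1] not in resident:
            worker = threading.Thread(target=prefetch, args=(layers[i + 1],))
            worker.start()

        outputs.append((layer, weights))          # stand-in for GPU compute
        if worker:
            worker.join()
    return outputs
```

The fixed-budget LRU here merely mimics the effect of a precomputed schedule; FlashMem determines its loading schedule statically, offline, per model.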
