[2602.15379] FlashMem: Supporting Modern DNN Workloads on Mobile with GPU Memory Hierarchy Optimizations
Summary
The paper presents FlashMem, a memory streaming framework designed to optimize the execution of large-scale deep neural networks (DNNs) on mobile GPUs, significantly improving memory efficiency and inference speed.
Why It Matters
As mobile applications increasingly rely on deep learning, optimizing DNN workloads is crucial for performance and user experience. FlashMem addresses a key limitation of existing frameworks by enabling efficient execution of models too large to preload in full, which is vital for advancing mobile AI capabilities.
Key Takeaways
- FlashMem offers a novel memory streaming approach for DNNs on mobile GPUs.
- The framework achieves significant memory reduction (2.0x to 8.4x) and speedup (1.7x to 75.0x) compared to full weight-preloading baselines.
- It dynamically streams model weights on demand, enhancing execution efficiency.
- FlashMem supports multi-DNN workloads, making it suitable for complex applications.
- The research highlights the importance of optimizing resource usage in mobile AI applications.
Computer Science > Distributed, Parallel, and Cluster Computing
arXiv:2602.15379 (cs)
[Submitted on 17 Feb 2026]

Title: FlashMem: Supporting Modern DNN Workloads on Mobile with GPU Memory Hierarchy Optimizations
Authors: Zhihao Shu, Md Musfiqur Rahman Sanim, Hangyu Zheng, Kunxiong Zhu, Miao Yin, Gagan Agrawal, Wei Niu

Abstract: The increasing size and complexity of modern deep neural networks (DNNs) pose significant challenges for on-device inference on mobile GPUs, with limited memory and computational resources. Existing DNN acceleration frameworks primarily deploy a weight preloading strategy, where all model parameters are loaded into memory before execution on mobile GPUs. We posit that this approach is not adequate for modern DNN workloads that comprise very large model(s) and possibly execution of several distinct models in succession. In this work, we introduce FlashMem, a memory streaming framework designed to efficiently execute large-scale modern DNNs and multi-DNN workloads while minimizing memory consumption and reducing inference latency. Instead of fully preloading weights, FlashMem statically determines model loading schedules and dynamically streams them on demand, leveraging 2.5D texture memory to minimize data transformations and improve execution efficiency. Experimental results on 11 models ...
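To make the streaming idea concrete, here is a minimal sketch of the general technique the abstract describes: instead of preloading every layer's weights, execute layers in a precomputed schedule while prefetching the next layer's weights in the background (double buffering). This is an illustration of the concept only, not FlashMem's actual implementation; the `load_weights`, `run_layer`, and `LAYER_WEIGHTS` names are hypothetical stand-ins.

```python
import threading
from queue import Queue

# Hypothetical per-layer weight blobs for a toy 4-layer model.
LAYER_WEIGHTS = {f"layer{i}": f"weights-for-layer{i}" for i in range(4)}

def load_weights(layer):
    # Stand-in for streaming a layer's weights from flash storage.
    return LAYER_WEIGHTS[layer]

def run_layer(layer, weights, trace):
    # Stand-in for GPU execution of one layer; records what ran with what.
    trace.append((layer, weights))

def streamed_inference(schedule, trace):
    """Run layers in schedule order, overlapping each layer's execution
    with the prefetch of the next layer's weights. At most two layers'
    weights are resident at any time, instead of the whole model."""
    prefetched = Queue(maxsize=1)

    def prefetch(layer):
        prefetched.put(load_weights(layer))

    current = load_weights(schedule[0])  # only the first layer is loaded upfront
    for i, layer in enumerate(schedule):
        worker = None
        if i + 1 < len(schedule):
            # Fetch the next layer's weights while this layer executes.
            worker = threading.Thread(target=prefetch, args=(schedule[i + 1],))
            worker.start()
        run_layer(layer, current, trace)
        if worker is not None:
            worker.join()
            current = prefetched.get()  # swap buffers for the next iteration
```

In a real system the schedule would be the statically determined loading plan the paper mentions, and the buffers would live in GPU texture memory rather than Python objects; the memory saving comes from keeping only the current and prefetched layers resident.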