[2508.16703] ShadowNPU: System and Algorithm Co-design for NPU-Centric On-Device LLM Inference
Computer Science > Performance

arXiv:2508.16703 (cs)

[Submitted on 22 Aug 2025 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: ShadowNPU: System and Algorithm Co-design for NPU-Centric On-Device LLM Inference

Authors: Wangsong Yin, Daliang Xu, Mengwei Xu, Gang Huang, Xuanzhe Liu

Abstract: Running Large Language Models (LLMs) on-device is now a critical enabler for preserving user privacy. We observe that in state-of-the-art frameworks, the attention operator falls back from the special-purpose NPU to the general-purpose CPU/GPU because of its sensitivity to quantization. This fallback degrades the user experience and complicates system scheduling. To this end, this paper presents shadowAttn, a system-algorithm co-designed sparse attention module that minimizes reliance on the CPU/GPU by computing attention on only a small subset of tokens. The key idea is to hide the overhead of estimating the important tokens behind an NPU-based pilot compute. Further, shadowAttn introduces techniques such as NPU compute-graph bucketing, a head-wise NPU-CPU/GPU pipeline, and per-head fine-grained sparsity ratios to achieve high accuracy and efficiency. shadowAttn delivers the best performance with highly limited CPU/GPU resources; it requires much less CPU/GPU resource to deli...
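The abstract's key idea (a cheap pilot pass that scores cached tokens, then exact attention over only the highest-scoring ones, with a separate sparsity ratio per head) can be illustrated with a minimal sketch. All function names, shapes, and keep ratios below are hypothetical, and the pilot scoring here is plain NumPy on the CPU; the paper's contribution is running that estimation on the NPU and hiding its overhead behind the pipeline, which this sketch does not model.

import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def pilot_top_tokens(q, K, keep_ratio):
    """Pilot pass (hypothetical): coarsely score every cached token against
    the query and keep only the top `keep_ratio` fraction of them."""
    scores = K @ q                        # (seq_len,) importance estimates
    k = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[-k:]        # indices of the k most important tokens

def sparse_attention(q, K, V, keep_ratio):
    """Exact attention restricted to the tokens chosen by the pilot pass."""
    idx = pilot_top_tokens(q, K, keep_ratio)
    w = softmax((K[idx] @ q) / np.sqrt(q.shape[-1]))
    return w @ V[idx]

# Per-head fine-grained sparsity: each head gets its own keep ratio
# (the values are placeholders, not from the paper).
rng = np.random.default_rng(0)
d, seq_len = 64, 1024
keep_ratios = [0.05, 0.10, 0.10, 0.20]
outputs = []
for ratio in keep_ratios:
    q = rng.standard_normal(d)
    K = rng.standard_normal((seq_len, d))
    V = rng.standard_normal((seq_len, d))
    outputs.append(sparse_attention(q, K, V, ratio))
print(np.stack(outputs).shape)            # (4, 64)

Giving each head its own keep ratio reflects the abstract's per-head fine-grained sparsity: heads differ in how concentrated their attention is, so a uniform ratio would either waste compute or hurt accuracy on some heads.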