[2508.16703] ShadowNPU: System and Algorithm Co-design for NPU-Centric On-Device LLM Inference



Computer Science > Performance

arXiv:2508.16703 (cs)

[Submitted on 22 Aug 2025 (v1), last revised 5 Apr 2026 (this version, v2)]

Title: ShadowNPU: System and Algorithm Co-design for NPU-Centric On-Device LLM Inference

Authors: Wangsong Yin, Daliang Xu, Mengwei Xu, Gang Huang, Xuanzhe Liu

Abstract: Running Large Language Models (LLMs) on-device is now a critical enabler for preserving user privacy. We observe that in state-of-the-art frameworks, the attention operator falls back from the special-purpose NPU to the general-purpose CPU/GPU because of its sensitivity to quantization. This fallback degrades the user experience and complicates system scheduling. To this end, this paper presents shadowAttn, a system-algorithm co-designed sparse attention module that minimizes reliance on the CPU/GPU by computing attention only on a small subset of tokens. The key idea is to hide the overhead of estimating the important tokens behind an NPU-based pilot compute. Furthermore, shadowAttn introduces techniques such as NPU compute-graph bucketing, a head-wise NPU-CPU/GPU pipeline, and per-head fine-grained sparsity ratios to achieve high accuracy and efficiency. shadowAttn delivers the best performance with highly limited CPU/GPU resources; it requires much less CPU/GPU resource to deli...
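The abstract's core mechanism can be illustrated with a minimal sketch: a cheap low-precision "pilot" pass scores every cached token, then full-precision attention is computed only on the top-scoring fraction, with the kept fraction chosen per head. This is an illustrative approximation of the idea, not the paper's implementation; the function name, the float16 pilot, and the `keep_ratios` parameter are assumptions for the example.

```python
import numpy as np

def pilot_sparse_attention(q, k, v, keep_ratios):
    """Sparse attention for one decode step, gated by a pilot pass.

    q: (H, d) per-head query for the current token
    k, v: (H, T, d) cached keys and values
    keep_ratios: (H,) per-head fraction of tokens to attend to
    """
    H, T, d = k.shape
    out = np.empty((H, d))
    for h in range(H):
        # Pilot pass: cheap low-precision scores stand in for the
        # NPU-side pilot compute that estimates token importance.
        pilot = (k[h].astype(np.float16) @ q[h].astype(np.float16)).astype(np.float32)

        # Per-head fine-grained sparsity: keep only the top-k tokens.
        topk = max(1, int(np.ceil(keep_ratios[h] * T)))
        idx = np.argpartition(pilot, -topk)[-topk:]

        # Full-precision softmax attention restricted to the selected tokens.
        scores = (k[h][idx] @ q[h]) / np.sqrt(d)
        w = np.exp(scores - scores.max())
        w /= w.sum()
        out[h] = w @ v[h][idx]
    return out
```

With `keep_ratios` of 1.0 this reduces to dense attention, which gives a simple correctness check; smaller ratios trade accuracy for the reduced compute that lets the operator stay on a narrow accelerator budget.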

Originally published on April 07, 2026. Curated by AI News.

