[2603.23822] How Vulnerable Are Edge LLMs?
Computer Science > Cryptography and Security
arXiv:2603.23822 (cs)
[Submitted on 25 Mar 2026]

Title: How Vulnerable Are Edge LLMs?
Authors: Ao Ding, Hongzong Li, Zi Liang, Zhanpeng Shi, Shuxin Zhuang, Shiqin Tang, Rong Feng, Ping Lu

Abstract: Large language models (LLMs) are increasingly deployed on edge devices under strict computation and quantization constraints, yet their security implications remain unclear. We study query-based knowledge extraction from quantized edge-deployed LLMs under realistic query budgets and show that, although quantization introduces noise, it does not remove the underlying semantic knowledge, allowing substantial behavioral recovery through carefully designed queries. To systematically analyze this risk, we propose CLIQ (Clustered Instruction Querying), a structured query construction framework that improves semantic coverage while reducing redundancy. Experiments on quantized Qwen models (INT8/INT4) demonstrate that CLIQ consistently outperforms original queries across BERTScore, BLEU, and ROUGE, enabling more efficient extraction under limited budgets. These results indicate that quantization alone does not provide effective protection against query-based extraction, highlighting a previously underexplored security risk in edge-deployed LLMs.

Subjects: Cryptography and Security (cs.CR); ...
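The abstract does not spell out how CLIQ constructs its clustered queries, but the stated goal — improving semantic coverage while reducing redundancy under a fixed query budget — can be illustrated with a minimal greedy diversity-selection sketch. Everything below (the toy bag-of-words embedding, the function names, the example instructions) is our own assumption for illustration, not the paper's actual method:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding; a stand-in for a real sentence encoder.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    # Cosine similarity between two sparse unit vectors.
    return sum(a[w] * b.get(w, 0.0) for w in a)

def select_diverse(instructions, budget):
    """Greedy farthest-point selection: repeatedly add the instruction
    least similar to those already chosen, so the final query set covers
    distinct semantic regions within a limited query budget."""
    vecs = [embed(t) for t in instructions]
    chosen = [0]  # seed with the first instruction
    while len(chosen) < min(budget, len(instructions)):
        best, best_score = None, None
        for i in range(len(instructions)):
            if i in chosen:
                continue
            # Similarity to the closest already-chosen query.
            sim = max(cosine(vecs[i], vecs[j]) for j in chosen)
            if best_score is None or sim < best_score:
                best, best_score = i, sim
        chosen.append(best)
    return [instructions[i] for i in chosen]

queries = [
    "Explain how quantization affects model weights",
    "Explain how quantization changes model weights",  # near-duplicate
    "Write a short poem about autumn",
    "Translate 'good morning' into French",
]
# The near-duplicate is pruned in favor of semantically distinct queries.
print(select_diverse(queries, 3))
```

With a budget of 3, the near-duplicate instruction is dropped and one query per semantic region survives, which is the redundancy-reduction behavior the abstract attributes to CLIQ.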