[2603.28817] GUARD-SLM: Token Activation-Based Defense Against Jailbreak Attacks for Small Language Models
Computer Science > Cryptography and Security
arXiv:2603.28817 (cs)
[Submitted on 28 Mar 2026]

Title: GUARD-SLM: Token Activation-Based Defense Against Jailbreak Attacks for Small Language Models
Authors: Md Jueal Mia, Joaquin Molto, Yanzhao Wu, M. Hadi Amini

Abstract: Small Language Models (SLMs) are emerging as efficient and economically viable alternatives to Large Language Models (LLMs), offering competitive performance at significantly lower computational cost and latency. These advantages make SLMs suitable for efficient deployment on resource-constrained edge devices. However, existing jailbreak defenses show limited robustness against heterogeneous attacks, largely due to an incomplete understanding of the internal representations, across the different layers of language models, that facilitate jailbreak behaviors. In this paper, we conduct a comprehensive empirical study of 9 jailbreak attacks across 7 SLMs and 3 LLMs. Our analysis shows that SLMs remain highly vulnerable to malicious prompts that bypass safety alignment. We analyze hidden-layer activations across different layers and model architectures, revealing that different input types form distinguishable patterns in the internal representation space. Based on this observation, we propose GUARD-SLM, a lightweight token act...
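The abstract's core observation, that benign and malicious prompts form distinguishable patterns in a model's internal representation space, can be illustrated with a minimal sketch. The code below is not the paper's method: it uses random vectors as stand-ins for per-token hidden-layer activations and a simple nearest-centroid rule, purely to show how mean-pooled activations from a calibration set could separate input types. All names (`fake_activations`, `classify`) and the pooling/centroid choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for per-token hidden states from an intermediate layer of an SLM.
# In a real pipeline these would come from a forward pass over the prompt.
def fake_activations(center, n_tokens=16, dim=64):
    return center + 0.1 * rng.standard_normal((n_tokens, dim))

# Two well-separated "input type" directions in the toy representation space.
benign_center = rng.standard_normal(64)
malicious_center = rng.standard_normal(64)

# Calibration prompts: mean-pool each prompt's token activations into one vector.
benign_reps = np.stack(
    [fake_activations(benign_center).mean(axis=0) for _ in range(20)])
malicious_reps = np.stack(
    [fake_activations(malicious_center).mean(axis=0) for _ in range(20)])

centroids = {
    "benign": benign_reps.mean(axis=0),
    "malicious": malicious_reps.mean(axis=0),
}

def classify(token_activations):
    """Label a prompt by the nearest calibration centroid of its pooled activations."""
    rep = token_activations.mean(axis=0)
    return min(centroids, key=lambda k: np.linalg.norm(rep - centroids[k]))

print(classify(fake_activations(malicious_center)))  # -> malicious
print(classify(fake_activations(benign_center)))     # -> benign
```

In this toy setting the two clusters are separable by construction; the paper's empirical claim is that real hidden-layer activations of SLMs exhibit analogous separability, which a lightweight detector can exploit.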