[2602.13547] AISA: Awakening Intrinsic Safety Awareness in Large Language Models against Jailbreak Attacks
Summary
The paper presents AISA, a defense mechanism for large language models (LLMs) that improves robustness against jailbreak attacks by activating safety behaviors already latent in the model, without fine-tuning, prompt rewriting, or external guardrails.
Why It Matters
As LLMs become increasingly integrated into various applications, ensuring their safety against harmful outputs is crucial. AISA offers a lightweight, efficient solution that preserves model utility while improving robustness, making it significant for developers and researchers focused on AI safety.
Key Takeaways
- AISA activates latent safety behaviors in LLMs without extensive fine-tuning.
- The method uses spatiotemporal analysis to localize intrinsic safety awareness.
- AISA achieves detector-level performance competitive with strong proprietary baselines on small (7B) models, with minimal overhead.
- The approach reduces false refusals while maintaining model utility.
- Extensive testing across multiple datasets and models demonstrates its effectiveness.
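The localization step above can be illustrated with a minimal sketch: a prompt-risk score read off the outputs of a compact set of attention heads at the final token before generation. The head indices, probe weights, and shapes below are hypothetical placeholders, not the paper's actual selection procedure.

```python
# Illustrative sketch (not the paper's implementation): scoring prompt risk
# from the cached outputs of a few pre-selected attention heads.
import numpy as np

rng = np.random.default_rng(0)

# Suppose we cached per-head attention outputs at the final structural token:
# shape (num_layers, num_heads, head_dim). Values here are random stand-ins.
num_layers, num_heads, head_dim = 32, 32, 128
head_outputs = rng.standard_normal((num_layers, num_heads, head_dim))

# A compact set of (layer, head) pairs, assumed to have been selected
# automatically offline (indices are hypothetical).
selected_heads = [(12, 5), (18, 21), (25, 3)]

# One linear probe per selected head, assumed trained offline to separate
# benign from harmful intent (weights are random stand-ins).
probes = {lh: rng.standard_normal(head_dim) for lh in selected_heads}

def prompt_risk_score(outputs, heads, probes):
    """Average sigmoid probe response over the selected heads; in [0, 1]."""
    logits = [outputs[l, h] @ probes[(l, h)] for (l, h) in heads]
    return float(np.mean(1.0 / (1.0 + np.exp(-np.array(logits)))))

score = prompt_risk_score(head_outputs, selected_heads, probes)
print(f"prompt risk score: {score:.3f}")
```

Because the score is a simple linear read-out of activations that are computed anyway during the forward pass, the added cost per prompt is negligible, which is what makes a single-pass defense feasible.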
Computer Science > Cryptography and Security — arXiv:2602.13547 (cs)
[Submitted on 14 Feb 2026]
Authors: Weiming Song, Xuan Xie, Ruiping Yin
Abstract: Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses rely on expensive fine-tuning, intrusive prompt rewriting, or external guardrails that add latency and can degrade helpfulness. We present AISA, a lightweight, single-pass defense that activates safety behaviors already latent inside the model rather than treating safety as an add-on. AISA first localizes intrinsic safety awareness via spatiotemporal analysis and shows that intent-discriminative signals are broadly encoded, with especially strong separability appearing in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation. Using a compact set of automatically selected heads, AISA extracts an interpretable prompt-risk score with minimal overhead, achieving detector-level performance competitive with strong proprietary baselines on small (7B) models. AISA then performs logits-level steering: it modulates the decoding distribution in proportion ...
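The logits-level steering described in the abstract can be sketched as a risk-proportional bias on the next-token distribution. The refusal-token indices, bias direction, and scaling constant below are hypothetical placeholders used only to show the mechanism.

```python
# Hypothetical sketch of logits-level steering: shift next-token logits
# toward refusal-associated tokens in proportion to a prompt-risk score.
import numpy as np

vocab_size = 10
base_logits = np.array([2.0, 1.0, 0.5, 0.0, -1.0, 3.0, 0.2, -0.5, 1.5, 0.8])

# A refusal direction in logit space (e.g., boosting tokens that begin a
# refusal); the chosen indices are placeholders.
refusal_tokens = [0, 3, 7]
refusal_direction = np.zeros(vocab_size)
refusal_direction[refusal_tokens] = 1.0

def steer_logits(logits, direction, risk, alpha=4.0):
    """Add a refusal bias scaled by the prompt-risk score (alpha is a knob)."""
    return logits + alpha * risk * direction

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Low-risk prompts are left nearly untouched; high-risk prompts shift
# probability mass toward refusal tokens.
low = softmax(steer_logits(base_logits, refusal_direction, risk=0.05))
high = softmax(steer_logits(base_logits, refusal_direction, risk=0.95))
print(low[refusal_tokens].sum(), high[refusal_tokens].sum())
```

Making the bias proportional to the risk score, rather than a hard on/off switch, is what lets this style of defense suppress harmful continuations while leaving benign decoding essentially unchanged, which matches the paper's claim of reduced false refusals.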