[2602.13547] AISA: Awakening Intrinsic Safety Awareness in Large Language Models against Jailbreak Attacks

arXiv - AI · 4 min read

Summary

The paper presents AISA, a novel defense mechanism for large language models (LLMs) that enhances safety against jailbreak attacks by activating intrinsic safety behaviors without extensive modifications.

Why It Matters

As LLMs become increasingly integrated into various applications, ensuring their safety against harmful outputs is crucial. AISA offers a lightweight, efficient solution that preserves model utility while improving robustness, making it significant for developers and researchers focused on AI safety.

Key Takeaways

  • AISA activates latent safety behaviors in LLMs without extensive fine-tuning.
  • The method uses spatiotemporal analysis to localize intrinsic safety awareness.
  • AISA's risk detector is competitive with strong proprietary baselines on small (7B) models, with minimal overhead.
  • The approach reduces false refusals while maintaining model utility.
  • Extensive testing across multiple datasets and models demonstrates its effectiveness.
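The head-level risk scoring in the takeaways above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the per-head "safety direction" vectors, the head selection, and the sigmoid squashing are all assumptions made for the sketch.

```python
import numpy as np

def prompt_risk_score(head_outputs, safety_dirs, selected_heads):
    """Illustrative prompt-risk score: project each selected attention head's
    output at the final pre-generation token onto a hypothetical per-head
    "safety direction", average the projections, and squash to [0, 1]."""
    scores = []
    for h in selected_heads:
        v = head_outputs[h]          # head output vector at the final token
        d = safety_dirs[h]           # unit direction separating benign/harmful intent
        scores.append(float(v @ d))  # signed separability along that direction
    mean_score = np.mean(scores)
    return 1.0 / (1.0 + np.exp(-mean_score))  # sigmoid -> risk in (0, 1)

# Toy inputs with random vectors, purely to exercise the function.
rng = np.random.default_rng(0)
head_outputs = {h: rng.normal(size=64) for h in range(8)}
safety_dirs = {h: rng.normal(size=64) for h in range(8)}
safety_dirs = {h: d / np.linalg.norm(d) for h, d in safety_dirs.items()}
risk = prompt_risk_score(head_outputs, safety_dirs, selected_heads=[1, 3, 5])
```

In a real setting the directions would be fit from labeled benign/harmful prompts and the heads chosen by the paper's spatiotemporal localization; here both are stand-ins.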

Computer Science > Cryptography and Security · arXiv:2602.13547 (cs) · Submitted on 14 Feb 2026

Title: AISA: Awakening Intrinsic Safety Awareness in Large Language Models against Jailbreak Attacks
Authors: Weiming Song, Xuan Xie, Ruiping Yin

Abstract: Large language models (LLMs) remain vulnerable to jailbreak prompts that elicit harmful or policy-violating outputs, while many existing defenses rely on expensive fine-tuning, intrusive prompt rewriting, or external guardrails that add latency and can degrade helpfulness. We present AISA, a lightweight, single-pass defense that activates safety behaviors already latent inside the model rather than treating safety as an add-on. AISA first localizes intrinsic safety awareness via spatiotemporal analysis and shows that intent-discriminative signals are broadly encoded, with especially strong separability appearing in the scaled dot-product outputs of specific attention heads near the final structural tokens before generation. Using a compact set of automatically selected heads, AISA extracts an interpretable prompt-risk score with minimal overhead, achieving detector-level performance competitive with strong proprietary baselines on small (7B) models. AISA then performs logits-level steering: it modulates the decoding distribution in proportion ...
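The abstract's "logits-level steering" can be pictured as nudging the next-token distribution toward refusal-style tokens in proportion to the risk score. The sketch below is a hedged interpretation: the `refusal_bias` vector, the scaling factor `alpha`, and the additive form of the shift are assumptions for illustration, not the paper's actual formula.

```python
import numpy as np

def steer_logits(logits, refusal_bias, risk, alpha=4.0):
    """Illustrative steering: shift next-token logits toward a hypothetical
    refusal-oriented bias vector, scaled by the prompt-risk score."""
    return logits + alpha * risk * refusal_bias

def softmax(x):
    z = x - x.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Tiny four-token "vocabulary" to show the effect of steering.
vocab = ["Sure", "I", "cannot", "help"]
logits = np.array([3.0, 1.0, 0.5, 0.2])
refusal_bias = np.array([-1.0, 0.5, 1.0, 0.5])  # penalizes compliance, favors refusal tokens

p_safe = softmax(steer_logits(logits, refusal_bias, risk=0.0))   # no steering
p_risky = softmax(steer_logits(logits, refusal_bias, risk=0.9))  # strong steering
```

With `risk=0.0` the distribution is untouched, which matches the abstract's claim of preserving utility (and reducing false refusals) on benign prompts; as risk rises, probability mass moves toward refusal-style continuations.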

Related Articles

Anthropic Teams Up With Its Rivals to Keep AI From Hacking Everything | WIRED

The AI lab's Project Glasswing will bring together Apple, Google, and more than 45 other organizations. They'll use the new Claude Mythos...

Wired - AI · 7 min · Llms

The public needs to control AI-run infrastructure, labor, education, and governance— NOT private actors

A lot of discussion around AI is becoming siloed, and I think that is dangerous. People in AI-focused spaces often talk as if the only qu...

Reddit - Artificial Intelligence · 1 min · Llms

Agents that write their own code at runtime and vote on capabilities, no human in the loop

hollowOS just hit v4.4 and I added something that I haven’t seen anyone else do. Previous versions gave you an OS for agents: structured ...

Reddit - Artificial Intelligence · 1 min
Google Maps can now write captions for your photos using AI | TechCrunch

Gemini can now create captions when users are looking to share a photo or video.

TechCrunch - AI · 4 min · Llms

Stay updated with AI News

Get the latest news, tools, and insights delivered to your inbox.

Daily or weekly digest • Unsubscribe anytime