[2602.19396] Hiding in Plain Text: Detecting Concealed Jailbreaks via Activation Disentanglement

arXiv - AI · 4 min read

Summary

This paper presents a novel framework for detecting concealed jailbreaks in large language models (LLMs) by disentangling semantic factors in model activations, enhancing anomaly detection and interpretability.

Why It Matters

As LLMs become increasingly integrated into various applications, their vulnerability to sophisticated jailbreak prompts poses significant risks. This research addresses these vulnerabilities by introducing a self-supervised method that improves detection and enhances the safety and interpretability of LLMs, making it crucial for developers and researchers in AI safety.

Key Takeaways

  • Introduces a self-supervised framework for detecting concealed jailbreaks in LLMs.
  • Develops GoalFrameBench, a corpus of prompts with controlled goal and framing variations used to train the disentanglement module.
  • Presents FrameShield, an anomaly detection tool that operates on disentangled representations.
  • Demonstrates the effectiveness of semantic disentanglement for improving model safety.
  • Highlights the interpretability benefits of disentanglement in LLM activations.

Computer Science > Artificial Intelligence
arXiv:2602.19396 (cs) · Submitted on 23 Feb 2026

Title: Hiding in Plain Text: Detecting Concealed Jailbreaks via Activation Disentanglement
Authors: Amirhossein Farzam, Majid Behabahani, Mani Malek, Yuriy Nevmyvaka, Guillermo Sapiro

Abstract: Large language models (LLMs) remain vulnerable to jailbreak prompts that are fluent and semantically coherent, and therefore difficult to detect with standard heuristics. A particularly challenging failure mode occurs when an attacker tries to hide the malicious goal of their request by manipulating its framing to induce compliance. Because these attacks maintain malicious intent through a flexible presentation, defenses that rely on structural artifacts or goal-specific signatures can fail. Motivated by this, we introduce a self-supervised framework for disentangling semantic factor pairs in LLM activations at inference. We instantiate the framework for goal and framing and construct GoalFrameBench, a corpus of prompts with controlled goal and framing variations, which we use to train a Representation Disentanglement on Activations (ReDAct) module to extract disentangled representations in a frozen LLM. We then propose FrameShield, an anomaly detector operating on the framing representations, which improves model...
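The abstract does not give implementation details for FrameShield, but the general idea of flagging unusual framing representations can be sketched with a simple distance-based anomaly detector. Everything below is an illustrative assumption: a Gaussian fit plus Mahalanobis scoring stands in for whatever detector the paper actually uses, and the random vectors stand in for real disentangled activations.

```python
import numpy as np

def fit_anomaly_detector(benign_reprs):
    """Fit a Gaussian (mean + inverse covariance) on benign framing
    representations. Illustrative stand-in, NOT the paper's FrameShield."""
    mu = benign_reprs.mean(axis=0)
    cov = np.cov(benign_reprs, rowvar=False)
    cov_inv = np.linalg.pinv(cov)  # pseudo-inverse for numerical stability
    return mu, cov_inv

def anomaly_score(repr_vec, mu, cov_inv):
    """Mahalanobis distance: larger values suggest a more unusual framing."""
    diff = repr_vec - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

# Toy usage: benign framings cluster near the mean; an outlier scores higher.
rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(500, 8))  # stand-in for framing reps
mu, cov_inv = fit_anomaly_detector(benign)
typical = anomaly_score(benign[0], mu, cov_inv)
outlier = anomaly_score(np.full(8, 6.0), mu, cov_inv)
```

A threshold on such a score would then separate in-distribution framings from concealed-jailbreak framings; the paper's actual detector and its training signal may differ substantially from this sketch.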
