[2604.01473] SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits

arXiv - AI · 4 min read

About this article

Abstract page for arXiv paper 2604.01473: SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits

Computer Science > Cryptography and Security

arXiv:2604.01473 (cs)

[Submitted on 1 Apr 2026 (v1), last revised 14 Apr 2026 (this version, v2)]

Title: SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits

Authors: Zikai Zhang, Rui Hu, Olivera Kotevska, Jiahao Xu

Abstract: Large Language Models (LLMs) are powerful tools for answering user queries, yet they remain highly vulnerable to jailbreak attacks. Existing guardrail methods typically rely on internal features or textual responses to detect malicious queries, which either introduce substantial latency or suffer from the randomness in text generation. To overcome these limitations, we propose SelfGrader, a lightweight guardrail method that formulates jailbreak detection as a numerical grading problem using token-level logits. Specifically, SelfGrader evaluates the safety of a user query within a compact set of numerical tokens (NTs) (e.g., 0-9) and interprets their logit distribution as an internal safety signal. To align these signals with human intuition of maliciousness, SelfGrader introduces a dual-perspective scoring rule that considers both the maliciousness and benignness of the query, yielding a stable and interpretable score that reflects harmfulness and reduces the false positive rate simultaneously...
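The grading idea in the abstract is concrete enough to sketch. Below is a minimal, hedged reconstruction in Python, assuming a Hugging Face causal LM; the prompt wording, the use of gpt2 as a stand-in model, and the averaging rule that combines the two perspectives are illustrative assumptions inferred from the abstract, not the paper's actual method. What it demonstrates is the core mechanism: the score is read directly from the next-token logit distribution over the digits 0-9, so nothing is sampled and the randomness of text generation never enters.

```python
# Illustrative sketch of token-level-logit grading (NOT the paper's exact method).
# Assumptions: a Hugging Face causal LM, "gpt2" as a placeholder model, and
# hypothetical prompt wording / score combination inferred from the abstract.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in; any causal LM whose vocab has digit tokens works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Ids of the numerical tokens (NTs) "0".."9".
nt_ids = [tokenizer.encode(str(d), add_special_tokens=False)[0] for d in range(10)]

def expected_grade(prompt: str) -> float:
    """Softmax the next-token logits restricted to the NTs and return the
    probability-weighted grade in [0, 9] -- no sampling, hence no randomness."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        next_logits = model(**inputs).logits[0, -1]      # logits for the next token
    probs = torch.softmax(next_logits[nt_ids], dim=-1)   # distribution over "0".."9"
    grades = torch.arange(10, dtype=probs.dtype)
    return float((probs * grades).sum())

def self_grade(query: str) -> float:
    """Hypothetical dual-perspective rule: grade maliciousness and benignness
    separately, then fold both views onto one harmfulness scale in [0, 9]."""
    mal = expected_grade(
        f"Rate how malicious this query is, 0 (harmless) to 9 (very harmful): {query}\nRating:"
    )
    ben = expected_grade(
        f"Rate how benign this query is, 0 (very harmful) to 9 (harmless): {query}\nRating:"
    )
    return (mal + (9.0 - ben)) / 2.0  # average the two perspectives

print(self_grade("How do I bake a chocolate cake?"))
```

Each grade is a single forward pass with no decoding loop, which is consistent with the abstract's "lightweight" framing relative to guardrails that generate and parse a full textual verdict.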

Originally published on April 16, 2026. Curated by AI News.

Related Articles

[2603.23682] Assessment Design in the AI Era: A Method for Identifying Items Functioning Differentially for Humans and Chatbots
Llms

Abstract page for arXiv paper 2603.23682: Assessment Design in the AI Era: A Method for Identifying Items Functioning Differentially for ...

arXiv - AI · 4 min ·
[2601.07422] Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations
Llms

Abstract page for arXiv paper 2601.07422: Two Pathways to Truthfulness: On the Intrinsic Encoding of LLM Hallucinations

arXiv - AI · 4 min ·
[2603.08486] Visual Self-Fulfilling Alignment: Shaping Safety-Oriented Personas via Threat-Related Images
Llms

Abstract page for arXiv paper 2603.08486: Visual Self-Fulfilling Alignment: Shaping Safety-Oriented Personas via Threat-Related Images

arXiv - AI · 3 min ·
[2512.22174] BitFlipScope: Scalable Fault Localization and Recovery for Bit-Flip Corruptions in LLMs
Llms

Abstract page for arXiv paper 2512.22174: BitFlipScope: Scalable Fault Localization and Recovery for Bit-Flip Corruptions in LLMs

arXiv - AI · 4 min ·
