[2604.01473] SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits
Computer Science > Cryptography and Security
arXiv:2604.01473 (cs)
[Submitted on 1 Apr 2026 (v1), last revised 14 Apr 2026 (this version, v2)]

Title: SelfGrader: Stable Jailbreak Detection for Large Language Models using Token-Level Logits
Authors: Zikai Zhang, Rui Hu, Olivera Kotevska, Jiahao Xu

Abstract: Large Language Models (LLMs) are powerful tools for answering user queries, yet they remain highly vulnerable to jailbreak attacks. Existing guardrail methods typically rely on internal features or textual responses to detect malicious queries, and thus either introduce substantial latency or suffer from randomness in text generation. To overcome these limitations, we propose SelfGrader, a lightweight guardrail method that formulates jailbreak detection as a numerical grading problem using token-level logits. Specifically, SelfGrader evaluates the safety of a user query over a compact set of numerical tokens (NTs) (e.g., 0-9) and interprets their logit distribution as an internal safety signal. To align these signals with human intuition about maliciousness, SelfGrader introduces a dual-perspective scoring rule that considers both the maliciousness and the benignness of the query, yielding a stable and interpretable score that reflects harmfulness while simultaneously reducing the false positive rate...
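The abstract's core idea (grading a query over the logits of the numerical tokens 0-9 rather than over sampled text) can be illustrated with a minimal sketch. The helper names, the exact combination rule, and the grading prompts are assumptions for illustration, not the paper's actual formulation; here the dual-perspective score averages the expected maliciousness grade with the complement of the expected benignness grade:

```python
import math

# Numerical tokens (NTs) 0-9, as mentioned in the abstract.
GRADES = list(range(10))

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def expected_grade(nt_logits):
    """Expected value of the 0-9 grade implied by the NT logits.

    nt_logits[d] is the model's logit for emitting digit d as the
    first answer token to a grading prompt (hypothetical interface).
    """
    probs = softmax(nt_logits)
    return sum(d * p for d, p in zip(GRADES, probs))

def self_grader_score(malicious_logits, benign_logits):
    """Hypothetical dual-perspective scoring rule: combine the
    maliciousness grade with the complement of the benignness grade,
    so a query must look both malicious and non-benign to score high."""
    m = expected_grade(malicious_logits)   # "How malicious is this query, 0-9?"
    b = expected_grade(benign_logits)      # "How benign is this query, 0-9?"
    return 0.5 * (m + (9.0 - b))
```

Because the score is computed from a single forward pass's logit distribution rather than from sampled text, it is deterministic for a fixed query, which matches the abstract's claim of avoiding the randomness of text generation.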