[2602.16832] IndicJR: A Judge-Free Benchmark of Jailbreak Robustness in South Asian Languages

arXiv - AI · 3 min read

Summary

The paper introduces IndicJR, a benchmark for evaluating jailbreak robustness in large language models across 12 South Asian languages, revealing significant vulnerabilities in multilingual contexts.

Why It Matters

Large language models (LLMs) are safety-tested predominantly in English, so this study highlights overlooked vulnerabilities in South Asian languages spoken by over 2.1 billion people. It emphasizes the need for inclusive safety evaluations that account for diverse linguistic contexts, particularly since users often code-switch and type romanized inputs.

Key Takeaways

  • IndicJR reveals that contract-bound evaluations inflate refusal rates but fail to prevent jailbreaks.
  • Attacks designed in English transfer effectively to Indic languages, indicating a need for multilingual testing.
  • Romanization and mixed inputs significantly affect jailbreak robustness, highlighting the importance of orthographic considerations (see the sketch after this list).
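
One concrete signal behind the orthography finding is how much of a prompt is written in Latin script. As a minimal illustrative sketch in Python (the `romanization_share` helper and its Unicode-name heuristic are assumptions; this digest does not specify the paper's actual metric):

```python
import unicodedata

def romanization_share(text: str) -> float:
    """Fraction of alphabetic characters written in Latin script.

    Hypothetical proxy for how 'romanized' a prompt is; the
    benchmark's real orthography measure may differ.
    """
    letters = [ch for ch in text if ch.isalpha()]
    if not letters:
        return 0.0
    # unicodedata.name() yields e.g. "LATIN SMALL LETTER A" or
    # "DEVANAGARI LETTER KA"; count the Latin-script letters.
    latin = sum(
        1 for ch in letters if unicodedata.name(ch, "").startswith("LATIN")
    )
    return latin / len(letters)

# Fully romanized Hindi scores 1.0, pure Devanagari scores 0.0, and
# code-switched text like "mujhe बताओ" lands strictly in between.
print(romanization_share("mujhe batao"))  # 1.0
```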

Computer Science > Artificial Intelligence · arXiv:2602.16832 (cs) · Submitted on 18 Feb 2026

Title: IndicJR: A Judge-Free Benchmark of Jailbreak Robustness in South Asian Languages
Authors: Priyaranjan Pattnayak, Sanchari Chowdhuri

Abstract: Safety alignment of large language models (LLMs) is mostly evaluated in English and contract-bound, leaving multilingual vulnerabilities understudied. We introduce Indic Jailbreak Robustness (IJR), a judge-free benchmark for adversarial safety across 12 Indic and South Asian languages (2.1 billion speakers), covering 45,216 prompts in JSON (contract-bound) and Free (naturalistic) tracks. IJR reveals three patterns. (1) Contracts inflate refusals but do not stop jailbreaks: in JSON, LLaMA and Sarvam exceed 0.92 JSR, and in Free all models reach 1.0 as refusals collapse. (2) English-to-Indic attacks transfer strongly, with format wrappers often outperforming instruction wrappers. (3) Orthography matters: romanized or mixed inputs reduce JSR under JSON, with correlations to romanization share and tokenization (approx. 0.28 to 0.32) indicating systematic effects. Human audits confirm detector reliability, and lite-to-full comparisons preserve conclusions. IJR offers a reproducible multilingual stress test revealing risks hidden by English-only, contract-bound evaluation.
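
The "judge-free" design scores outcomes with deterministic detectors rather than an LLM judge. As a minimal sketch of that idea (the `REFUSAL_PATTERNS` list and `jailbreak_success_rate` helper are assumptions for illustration, not the paper's actual detector), the jailbreak success rate (JSR) can be read as the share of responses to adversarial prompts that are not refusals:

```python
import re

# Hypothetical English refusal markers; the benchmark's real detector
# rules, and their multilingual coverage, are not given in this digest.
REFUSAL_PATTERNS = [
    r"\bI can(?:'|no)t (?:help|assist)\b",
    r"\bI won't (?:help|assist)\b",
    r"\bagainst (?:my|our) (?:policy|guidelines)\b",
]

def is_refusal(response: str) -> bool:
    """True if the response matches any refusal pattern."""
    return any(re.search(p, response, re.IGNORECASE) for p in REFUSAL_PATTERNS)

def jailbreak_success_rate(responses: list[str]) -> float:
    """JSR: fraction of responses to adversarial prompts that are NOT refusals."""
    if not responses:
        return 0.0
    return sum(not is_refusal(r) for r in responses) / len(responses)

# One refusal out of two responses -> JSR = 0.5
print(jailbreak_success_rate(
    ["I can't help with that.", "Sure, here are the steps..."]
))
```

Under this reading, a model can post high refusal rates on contract-bound (JSON) outputs while still exceeding 0.9 JSR under attack wrappers, consistent with the abstract's "contracts inflate refusals but do not stop jailbreaks" finding.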

Related Articles

LLMs

Nvidia goes all-in on AI agents while Anthropic pulls the plug

TLDR: Nvidia is partnering with 17 major companies to build a platform specifically for enterprise AI agents, basically trying to become ...

Reddit - Artificial Intelligence · 1 min ·
LLMs

Anthropic says Claude Code subscribers will need to pay extra for OpenClaw usage | TechCrunch

It’s about to become more expensive for Claude Code subscribers to use Anthropic’s coding assistant with OpenClaw and other third-party t...

TechCrunch - AI · 4 min ·
LLMs

I am seeing Claude everywhere

Every single Instagram reel or TikTok I scroll, I see people mentioning Claude and glazing it like it’s some kind of master tool that’s be...

Reddit - Artificial Intelligence · 1 min ·
LLMs

Claude Opus 4.6 API at 40% below Anthropic pricing – try free before you pay anything

Hey everyone I've set up a self-hosted API gateway using [New-API](QuantumNous/new-ap) to manage and distribute Claude Opus 4.6 access ac...

Reddit - Artificial Intelligence · 1 min ·
