[2603.12277] Prompt Injection as Role Confusion
About this article
Abstract page for arXiv paper 2603.12277: Prompt Injection as Role Confusion
Computer Science > Computation and Language arXiv:2603.12277 (cs) [Submitted on 22 Feb 2026 (v1), last revised 20 Mar 2026 (this version, v2)] Title:Prompt Injection as Role Confusion Authors:Charles Ye, Jasmine Cui, Dylan Hadfield-Menell View a PDF of the paper titled Prompt Injection as Role Confusion, by Charles Ye and 2 other authors View PDF HTML (experimental) Abstract:Language models remain vulnerable to prompt injection attacks despite extensive safety training. We trace this failure to role confusion: models infer roles from how text is written, not where it comes from. We design novel role probes to capture how models internally identify "who is speaking." These reveal why prompt injection works: untrusted text that imitates a role inherits that role's authority. We test this insight by injecting spoofed reasoning into user prompts and tool outputs, achieving average success rates of 60% on StrongREJECT and 61% on agent exfiltration, across multiple open- and closed-weight models with near-zero baselines. Strikingly, the degree of internal role confusion strongly predicts attack success before generation begins. Our findings reveal a fundamental gap: security is defined at the interface but authority is assigned in latent space. More broadly, we introduce a unifying, mechanistic framework for prompt injection, demonstrating that diverse prompt-injection attacks exploit the same underlying role-confusion mechanism. Subjects: Computation and Language (cs.CL); Artif...