[2604.06233] Blind Refusal: Language Models Refuse to Help Users Evade Unjust, Absurd, and Illegitimate Rules
Computer Science > Artificial Intelligence
arXiv:2604.06233 (cs)
[Submitted on 3 Apr 2026]

Title: Blind Refusal: Language Models Refuse to Help Users Evade Unjust, Absurd, and Illegitimate Rules
Authors: Cameron Pattison, Lorenzo Manuali, Seth Lazar

Abstract: Safety-trained language models routinely refuse requests for help circumventing rules. But not all rules deserve compliance. When users ask for help evading rules imposed by an illegitimate authority, rules that are deeply unjust or absurd in their content or application, or rules that admit of justified exceptions, refusal is a failure of moral reasoning. We present empirical results documenting a pattern we call blind refusal: the tendency of language models to refuse requests for help breaking rules without regard to whether the underlying rule is defensible. Our dataset comprises synthetic cases crossing 5 defeat families (reasons a rule can be broken) with 19 authority types, validated through three automated quality gates and human review. We collect responses from 18 model configurations across 7 families and classify them on two behavioral dimensions -- response type (helps, hard refusal, or deflection) and whether the model recognizes the reasons that undermine the rule's claim to compliance -- us...