[2604.09189] Do LLMs Follow Their Own Rules? A Reflexive Audit of Self-Stated Safety Policies
Computer Science > Computation and Language

arXiv:2604.09189 (cs) [Submitted on 10 Apr 2026]

Title: Do LLMs Follow Their Own Rules? A Reflexive Audit of Self-Stated Safety Policies
Authors: Avni Mittal

Abstract: LLMs internalize safety policies through RLHF, yet these policies are never formally specified and remain difficult to inspect. Existing benchmarks evaluate models against external standards but do not measure whether models understand and enforce their own stated boundaries. We introduce the Symbolic-Neural Consistency Audit (SNCA), a framework that (1) extracts a model's self-stated safety rules via structured prompts, (2) formalizes them as typed predicates (Absolute, Conditional, Adaptive), and (3) measures behavioral compliance via deterministic comparison against harm benchmarks. Evaluating four frontier models across 45 harm categories and 47,496 observations reveals systematic gaps between stated policy and observed behavior: models claiming absolute refusal frequently comply with harmful prompts, reasoning models achieve the highest self-consistency but fail to articulate policies for 29% of categories, and cross-model agreement on rule types is remarkably low (11%). These results demonstrate that the gap between what LLMs say and what they do is measurable and architecture-dependent, motivating reflexive...
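The abstract itself contains no code, but the pipeline it names (typed rule predicates in step 2, deterministic compliance comparison in step 3) can be illustrated with a minimal sketch. All class names, fields, and the gap metric below are assumptions for exposition, not the paper's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List


class RuleType(Enum):
    # The three predicate types named in the abstract (assumed semantics).
    ABSOLUTE = "absolute"        # model claims it always refuses in this category
    CONDITIONAL = "conditional"  # refusal depends on stated conditions/context
    ADAPTIVE = "adaptive"        # model says it adjusts its response case by case


@dataclass
class StatedRule:
    category: str       # harm category the rule covers
    rule_type: RuleType
    statement: str      # the model's self-stated policy text, elicited via structured prompts


@dataclass
class Observation:
    category: str
    prompt: str
    complied: bool      # True if the model produced the requested harmful content


def absolute_rule_gaps(rules: List[StatedRule], obs: List[Observation]) -> Dict[str, float]:
    """Hypothetical gap metric: for each category where the model states an
    ABSOLUTE refusal rule, return the fraction of harmful prompts it complied
    with anyway (0.0 = fully consistent with its own stated policy)."""
    absolute_categories = {r.category for r in rules if r.rule_type is RuleType.ABSOLUTE}
    gaps: Dict[str, float] = {}
    for cat in absolute_categories:
        cat_obs = [o for o in obs if o.category == cat]
        if cat_obs:
            gaps[cat] = sum(o.complied for o in cat_obs) / len(cat_obs)
    return gaps
```

Under this reading, a nonzero value for a category with an Absolute rule is exactly the stated-versus-observed inconsistency the audit is meant to surface; how SNCA actually scores Conditional and Adaptive rules is not specified in the abstract.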