[2603.21415] Silent Commitment Failure in Instruction-Tuned Language Models: Evidence of Governability Divergence Across Architectures
Computer Science > Artificial Intelligence
arXiv:2603.21415 (cs)
[Submitted on 22 Mar 2026]

Title: Silent Commitment Failure in Instruction-Tuned Language Models: Evidence of Governability Divergence Across Architectures
Authors: Gregory M. Ruddell

Abstract: As large language models are deployed as autonomous agents with tool-execution privileges, a critical assumption underpins their security architecture: that model errors are detectable at runtime. We present empirical evidence that this assumption fails for two of the three instruction-following models that could be evaluated for conflict detection. We introduce governability -- the degree to which a model's errors are detectable before output commitment and correctable once detected -- and demonstrate that it varies dramatically across models. Across six models and twelve reasoning domains, two of the three instruction-following models exhibited silent commitment failure: confident, fluent, incorrect output with zero warning signal. The remaining model produced a detectable conflict signal 57 tokens before commitment under greedy decoding. We show that benchmark accuracy does not predict governability, that correction capacity varies independently of detection, and that identical governance scaffolds produce opposite effects across models. A 2x2 experiment shows a 52x difference in...
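The abstract's notion of a warning signal emitted before output commitment can be made concrete with a small instrumentation sketch. The snippet below is not the paper's method: it assumes next-token entropy under greedy decoding as one possible proxy for a pre-commitment conflict signal, and the model name, threshold, and prompt are placeholders chosen purely for illustration.

```python
# Hypothetical sketch: record a per-token "warning signal" (next-token entropy)
# while greedy-decoding, and flag high-entropy steps before the output is committed.
# The signal choice, model, and threshold are assumptions, not the paper's protocol.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper evaluates instruction-tuned models
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def greedy_with_signal(prompt: str, max_new_tokens: int = 64, threshold: float = 4.0):
    """Greedy decoding that records next-token entropy and flags steps above threshold."""
    ids = tok(prompt, return_tensors="pt").input_ids
    flagged = []
    for step in range(max_new_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]              # logits for the next token
        probs = torch.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum().item()
        if entropy > threshold:
            flagged.append((step, round(entropy, 2)))       # candidate conflict signal
        next_id = int(torch.argmax(logits))                 # greedy commitment
        ids = torch.cat([ids, torch.tensor([[next_id]])], dim=1)
        if next_id == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True), flagged

text, warnings = greedy_with_signal("List the prime numbers between 10 and 20.")
print(text)
print("high-entropy steps:", warnings)
```

In this framing, a model that fails silently would produce no flagged steps before an incorrect completion, whereas a governable model would surface a detectable signal some number of tokens ahead of commitment.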