[2603.00070] Certainty-Validity: A Diagnostic Framework for Discrete Commitment Systems
Computer Science > Machine Learning
arXiv:2603.00070 (cs)
[Submitted on 10 Feb 2026]

Title: Certainty-Validity: A Diagnostic Framework for Discrete Commitment Systems
Authors: Datorien L. Anderson

Abstract: Standard evaluation metrics for machine learning -- accuracy, precision, recall, and AUROC -- assume that all errors are equivalent: a confident incorrect prediction is penalized identically to an uncertain one. For discrete commitment systems (architectures that select committed states {-W, 0, +W}), this assumption is epistemologically flawed. We introduce the Certainty-Validity (CVS) Framework, a diagnostic method that decomposes model performance into a 2x2 matrix distinguishing high/low certainty from valid/invalid predictions. This framework reveals a critical failure mode hidden by standard accuracy: Confident-Incorrect (CI) behavior, in which models hallucinate structure in ambiguous data. Through ablation experiments on Fashion-MNIST, EMNIST, and IMDB, we analyze the "83% Ambiguity Ceiling" -- a stopping point where this specific discrete architecture consistently plateaus on noisy benchmarks. Unlike continuous models, which can surpass this ceiling by memorizing texture or statistical noise, the discrete model refuses to commit to ambiguous samples. We show that this refusal is not a failure but a feature: the m...
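The abstract describes a 2x2 decomposition crossing certainty (high/low) with validity (valid/invalid). The paper's exact certainty criterion is not given on this page; the sketch below assumes a hypothetical confidence threshold `tau` to separate high- from low-certainty predictions, and simply counts each prediction into one of the four cells, surfacing the Confident-Incorrect (CI) cell that standard accuracy hides.

```python
from collections import Counter

def cvs_matrix(confidences, predictions, labels, tau=0.9):
    """Count predictions into the 2x2 Certainty-Validity cells.

    `tau` is a hypothetical certainty threshold, not taken from the
    paper; any monotone certainty score could be substituted.
    Cells: CV (Confident-Valid), CI (Confident-Incorrect),
           UV (Uncertain-Valid), UI (Uncertain-Incorrect).
    """
    cells = Counter()
    for conf, pred, label in zip(confidences, predictions, labels):
        certainty = "C" if conf >= tau else "U"  # high vs low certainty
        validity = "V" if pred == label else "I"  # valid vs invalid
        cells[certainty + validity] += 1
    return dict(cells)

# Example with one Confident-Incorrect case -- the failure mode
# the framework is designed to expose.
counts = cvs_matrix(
    confidences=[0.95, 0.97, 0.40, 0.30],
    predictions=[1, 0, 1, 2],
    labels=[1, 1, 1, 2],
)
print(counts)  # {'CV': 1, 'CI': 1, 'UV': 2}
```

Under this reading, the "83% Ambiguity Ceiling" corresponds to the discrete model routing ambiguous samples into the low-certainty row rather than inflating the CI cell.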