[2603.03971] Upholding Epistemic Agency: A Brouwerian Assertibility Constraint for Responsible AI
Computer Science > Computers and Society
arXiv:2603.03971 (cs) [Submitted on 4 Mar 2026]

Title: Upholding Epistemic Agency: A Brouwerian Assertibility Constraint for Responsible AI
Authors: Michael Jülich

Abstract: Generative AI can convert uncertainty into authoritative-seeming verdicts, displacing the justificatory work on which democratic epistemic agency depends. As a corrective, I propose a Brouwer-inspired assertibility constraint for responsible AI: in high-stakes domains, systems may assert or deny claims only if they can provide a publicly inspectable and contestable certificate of entitlement; otherwise they must return "Undetermined". This constraint yields a three-status interface semantics (Asserted, Denied, Undetermined) that cleanly separates internal entitlement from public standing while connecting them via the certificate as a boundary object. It also produces a time-indexed entitlement profile that is stable under numerical refinement yet revisable as the public record changes. I operationalize the constraint through decision-layer gating of threshold and argmax outputs, using internal witnesses (e.g., sound bounds or separation margins) and an output contract with reason-coded abstentions. A design lemma shows that any total, certificate-sound binary interface already decides the deployed pred...
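The decision-layer gating the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical illustration (the function name `gate`, the `margin` parameter, and the reason code `INSUFFICIENT_MARGIN` are assumptions, not the paper's actual interface): a thresholded score is asserted or denied only when it clears the threshold by a separation margin, which serves as the internal witness attached to the certificate; otherwise the system abstains with a reason-coded "Undetermined".

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    ASSERTED = "Asserted"
    DENIED = "Denied"
    UNDETERMINED = "Undetermined"


@dataclass
class Verdict:
    status: Status
    certificate: dict  # publicly inspectable witness, or a reason-coded abstention


def gate(score: float, threshold: float, margin: float) -> Verdict:
    """Gate a thresholded score behind a separation-margin witness.

    Hypothetical sketch of the abstract's decision-layer gating: assert
    or deny only when the score clears the threshold by at least
    `margin`; otherwise return Undetermined with a reason code.
    """
    if score >= threshold + margin:
        return Verdict(Status.ASSERTED, {
            "witness": "separation-margin",
            "score": score, "threshold": threshold,
            "achieved_margin": score - threshold,
        })
    if score <= threshold - margin:
        return Verdict(Status.DENIED, {
            "witness": "separation-margin",
            "score": score, "threshold": threshold,
            "achieved_margin": threshold - score,
        })
    return Verdict(Status.UNDETERMINED, {
        "reason_code": "INSUFFICIENT_MARGIN",
        "score": score, "threshold": threshold,
        "required_margin": margin,
    })
```

Note that the interface is total (every input yields one of the three statuses) and every non-abstaining verdict carries its witness, mirroring the paper's output contract with reason-coded abstentions.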