[2603.25480] Retraining as Approximate Bayesian Inference
Computer Science > Artificial Intelligence

arXiv:2603.25480 (cs)
[Submitted on 26 Mar 2026]

Title: Retraining as Approximate Bayesian Inference
Authors: Harrison Katz

Abstract: Model retraining is usually treated as an ongoing maintenance task. This article argues that retraining is better understood as approximate Bayesian inference under computational constraints. The gap between a continuously updated belief state and the frozen deployed model is "learning debt," and the retraining decision is a cost-minimization problem whose threshold falls out of the loss function. The article develops a decision-theoretic framework for retraining policies, yielding evidence-based triggers that replace calendar schedules and make governance auditable. For readers less familiar with the Bayesian and decision-theoretic language, key terms are defined in a glossary at the end of the article.

Subjects: Artificial Intelligence (cs.AI); Statistics Theory (math.ST)
Cite as: arXiv:2603.25480 [cs.AI] (or arXiv:2603.25480v1 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.25480 (arXiv-issued DOI via DataCite, pending registration)

Submission history
From: Harrison Katz [view email]
[v1] Thu, 26 Mar 2026 14:20:01 UTC (1,660 KB)
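The abstract's cost-minimization framing can be sketched concretely: an evidence-based trigger fires when the loss accumulated from "learning debt" over a review horizon exceeds the one-off cost of retraining. The following is a minimal illustrative sketch only; the function names, the linear cost model, and the held-out-loss inputs are assumptions for exposition, not the paper's actual formulation.

```python
# Hypothetical sketch of an evidence-based retraining trigger in the spirit
# of the abstract's cost-minimization framing. All names and the specific
# cost model are illustrative assumptions.

def learning_debt(deployed_loss: float, updated_loss: float) -> float:
    """Per-prediction loss gap between the frozen deployed model and a
    continuously updated belief state (assumed estimable on held-out data)."""
    return max(0.0, deployed_loss - updated_loss)


def should_retrain(deployed_loss: float,
                   updated_loss: float,
                   retraining_cost: float,
                   predictions_until_next_review: int) -> bool:
    """Fire the trigger when accumulated debt over the review horizon
    exceeds the one-off retraining cost: debt * horizon > cost."""
    debt = learning_debt(deployed_loss, updated_loss)
    return debt * predictions_until_next_review > retraining_cost


# A 0.02 per-prediction loss gap over 10,000 upcoming predictions costs ~200
# in expectation, which outweighs a retraining cost of 150: trigger fires.
print(should_retrain(0.30, 0.28, retraining_cost=150.0,
                     predictions_until_next_review=10_000))
```

Under this framing, the threshold is not a tuned hyperparameter: it falls directly out of the loss function and the stated retraining cost, which is what makes the resulting governance auditable.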