[2511.17714] Learning the Value of Value Learning
Computer Science > Artificial Intelligence
arXiv:2511.17714 (cs)
[Submitted on 21 Nov 2025 (v1), last revised 13 Apr 2026 (this version, v5)]

Title: Learning the Value of Value Learning
Authors: Alex John London, Aydin Mohseni

Abstract: Standard decision frameworks address uncertainty about facts but assume fixed options and values. We extend the Jeffrey-Bolker framework to model refinements in values and prove a value-of-information theorem for axiological refinement. In multi-agent settings, we establish that mutual refinement will characteristically transform zero-sum games into positive-sum interactions and yield Pareto improvements in Nash bargaining. These results show that a framework of rational choice can be extended to model value refinement. By unifying epistemic and axiological refinement under a single formalism, we broaden the conceptual foundations of rational choice and illuminate the normative status of ethical deliberation.

Subjects: Artificial Intelligence (cs.AI); Computer Science and Game Theory (cs.GT)
Cite as: arXiv:2511.17714 [cs.AI] (or arXiv:2511.17714v5 [cs.AI] for this version)
DOI: https://doi.org/10.48550/arXiv.2511.17714

Submission history
From: Aydin Mohseni
[v1] Fri, 21 Nov 2025 19:06:30 UTC (46 KB)
[v2] Mon, 1 Dec 2025 15:18:00 UTC (46 KB...
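The abstract's "value-of-information theorem for axiological refinement" extends a classical result: refining an agent's information before choosing can never lower, and may raise, optimal expected utility. The sketch below illustrates only that classical epistemic analogue, not the paper's axiological theorem; the states, acts, and utility numbers are hypothetical.

```python
# Illustrative sketch (not from the paper): the classical value-of-information
# inequality. An agent who observes a perfectly informative signal before
# acting does at least as well, in expectation, as one who must commit first.

# Two equiprobable states; u[act][state] gives hypothetical utilities.
p = {"s1": 0.5, "s2": 0.5}
u = {"a1": {"s1": 1.0, "s2": 0.0},
     "a2": {"s1": 0.0, "s2": 1.0}}

# Best act chosen before any observation: maximize prior expected utility.
eu_prior = max(sum(p[s] * u[a][s] for s in p) for a in u)

# With a perfectly informative signal: observe the state, pick the best
# act for that state, then average over states.
eu_informed = sum(p[s] * max(u[a][s] for a in u) for s in p)

print(eu_prior, eu_informed)  # informed choice is at least as good
assert eu_informed >= eu_prior
```

Here the uninformed agent can guarantee only 0.5 in expectation, while observing the state first yields 1.0; the gap is the (always nonnegative) value of the information.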