[2603.02960] Architecting Trust in Artificial Epistemic Agents
Computer Science > Artificial Intelligence
arXiv:2603.02960 (cs)
[Submitted on 3 Mar 2026]

Title: Architecting Trust in Artificial Epistemic Agents
Authors: Nahema Marchal, Stephanie Chan, Matija Franklin, Manon Revel, Geoff Keeling, Roberta Fischli, Bilva Chandra, Iason Gabriel

Abstract: Large language models increasingly function as epistemic agents -- entities that can 1) autonomously pursue epistemic goals and 2) actively shape our shared knowledge environment. They curate the information we receive, often supplanting traditional search-based methods, and are frequently used to generate both personal and deeply specialized advice. How they perform these functions, including whether they are reliable and properly calibrated to both individual and collective epistemic norms, is therefore highly consequential for the choices we make. We argue that the potential impact of epistemic AI agents on practices of knowledge creation, curation, and synthesis, particularly in the context of complex multi-agent interactions, creates new informational interdependencies that necessitate a fundamental shift in the evaluation and governance of AI. While a well-calibrated ecosystem could augment human judgment and collective decision-making, poorly aligned agents risk causing cognitive deskilling and epistemic drift, making the calibration of these ...