[2512.19570] The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge
Computer Science > Human-Computer Interaction
arXiv:2512.19570 (cs)
[Submitted on 22 Dec 2025 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge
Authors: Angjelin Hila

Abstract: We examine epistemological threats posed by human–LLM interaction. We develop collective epistemology as a theory of epistemic warrant distributed across human collectives, taking bounded rationality and dual process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as reliable transmission of truths. Both are necessary for collective rationality, but only internalist justification produces reflective knowledge. We specify reflective knowledge as follows: agents understand the evaluative basis of a claim; when that basis is unavailable, agents consistently assess the reliability of truth sources; and agents have a duty to apply these standards within their domains of competence. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves pos...