[2512.19570] The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge

arXiv - AI 4 min read

About this article

Computer Science > Human-Computer Interaction
arXiv:2512.19570 (cs)
[Submitted on 22 Dec 2025 (v1), last revised 3 Mar 2026 (this version, v2)]

Title: The Epistemological Consequences of Large Language Models: Rethinking collective intelligence and institutional knowledge
Authors: Angjelin Hila

Abstract: We examine the epistemological threats posed by human-LLM interaction. We develop collective epistemology as a theory of epistemic warrant distributed across human collectives, using bounded rationality and dual process theory as background. We distinguish internalist justification, defined as reflective understanding of why a proposition is true, from externalist justification, defined as reliable transmission of truths. Both are necessary for collective rationality, but only internalist justification produces reflective knowledge. We specify reflective knowledge as follows: agents understand the evaluative basis of a claim; when that basis is unavailable, agents consistently assess the reliability of truth sources; and agents have a duty to apply these standards within their domains of competence. We argue that LLMs approximate externalist reliabilism because they can reliably transmit information whose justificatory basis is established elsewhere, but they do not themselves pos...

Originally published on March 05, 2026. Curated by AI News.

Related Articles

[2603.29171] Segmentation of Gray Matters and White Matters from Brain MRI data
arXiv - Machine Learning · 4 min

[2602.09924] LLMs Encode Their Failures: Predicting Success from Pre-Generation Activations
arXiv - Machine Learning · 3 min

[2602.01528] Making Bias Non-Predictive: Training Robust LLM Reasoning via Reinforcement Learning
arXiv - Machine Learning · 4 min

[2601.22783] Compact Hypercube Embeddings for Fast Text-based Wildlife Observation Retrieval
arXiv - Machine Learning · 4 min

