[2603.21735] Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction

arXiv - AI


Computer Science > Human-Computer Interaction
arXiv:2603.21735 (cs) [Submitted on 23 Mar 2026]

Title: Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction
Authors: Kuangzhe Xu, Yu Shen, Longjie Yan, Yinghui Ren

Abstract: The proliferation of Generative Artificial Intelligence has transformed benign cognitive offloading into a systemic risk of cognitive agency surrender. Driven by the commercial dogma of "zero-friction" design, highly fluent AI interfaces actively exploit human cognitive miserliness, prematurely satisfying the need for cognitive closure and inducing severe automation bias. To empirically quantify this epistemic erosion, we deployed a zero-shot semantic classification pipeline ($\tau=0.7$) on 1,223 high-confidence AI-HCI papers from 2023 to early 2026. Our analysis reveals an escalating "agentic takeover": a brief 2025 surge in research defending human epistemic sovereignty (19.1%) was abruptly suppressed in early 2026 (13.1%) by an explosive shift toward optimizing autonomous machine agents (19.6%), while frictionless usability maintained a structural hegemony (67.3%). To dismantle this trap, we theorize "Scaffolded Cognitive Friction," repurposing Multi-Agent Systems (MAS) as explicit cognitive forcing functions (e.g., com...
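The confidence-thresholded zero-shot labeling the abstract describes ($\tau=0.7$) can be sketched as follows. The category labels, the shape of the score dictionary, and the `classify` helper are illustrative assumptions for this note, not the authors' actual pipeline:

```python
# Minimal sketch of confidence-thresholded zero-shot classification.
# Any zero-shot classifier returning one score per candidate label
# could feed this step; the labels below paraphrase the abstract's
# three research categories and are assumptions, not the paper's taxonomy.

TAU = 0.7  # confidence threshold reported in the abstract

LABELS = [
    "epistemic sovereignty defense",
    "autonomous agent optimization",
    "frictionless usability",
]

def classify(scores, tau=TAU):
    """Keep a paper's top label only if it clears the threshold; else drop it."""
    label, score = max(scores.items(), key=lambda kv: kv[1])
    return label if score >= tau else None

# One high-confidence paper is kept, one ambiguous paper is discarded.
print(classify({"epistemic sovereignty defense": 0.82,
                "autonomous agent optimization": 0.10,
                "frictionless usability": 0.08}))
# -> epistemic sovereignty defense
print(classify({label: 1 / 3 for label in LABELS}))
# -> None
```

Filtering by a high threshold like 0.7 is what yields the "high-confidence" paper subset the abstract counts: ambiguous papers simply never enter the category tallies.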

Originally published on March 24, 2026. Curated by AI News.

