[2603.21735] Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction
Computer Science > Human-Computer Interaction
arXiv:2603.21735 (cs)
[Submitted on 23 Mar 2026]

Title: Cognitive Agency Surrender: Defending Epistemic Sovereignty via Scaffolded AI Friction
Authors: Kuangzhe Xu, Yu Shen, Longjie Yan, Yinghui Ren

Abstract: The proliferation of Generative Artificial Intelligence has transformed benign cognitive offloading into a systemic risk of cognitive agency surrender. Driven by the commercial dogma of "zero-friction" design, highly fluent AI interfaces actively exploit human cognitive miserliness, prematurely satisfying the need for cognitive closure and inducing severe automation bias. To empirically quantify this epistemic erosion, we deployed a zero-shot semantic classification pipeline ($\tau=0.7$) on 1,223 high-confidence AI-HCI papers from 2023 to early 2026. Our analysis reveals an escalating "agentic takeover": a brief 2025 surge in research defending human epistemic sovereignty (19.1%) was abruptly suppressed in early 2026 (13.1%) by an explosive shift toward optimizing autonomous machine agents (19.6%), while frictionless usability maintained a structural hegemony (67.3%). To dismantle this trap, we theorize "Scaffolded Cognitive Friction," repurposing Multi-Agent Systems (MAS) as explicit cognitive forcing functions (e.g., com...