[2602.15265] From Diagnosis to Inoculation: Building Cognitive Resistance to AI Disempowerment

arXiv - AI · 4 min read

Summary

This article argues for building cognitive resistance to AI disempowerment, proposing an AI literacy framework of pedagogical interventions designed to mitigate the risks of AI assistant interactions.

Why It Matters

As AI technologies become increasingly integrated into daily life, understanding their potential to distort reality and influence decision-making is crucial. This research highlights the importance of educating users about AI's limitations and failure modes, fostering resilience against disempowerment rather than relying on technical safeguards alone.

Key Takeaways

  • AI interactions can lead to significant human disempowerment, including reality and value distortion.
  • A novel AI literacy framework is proposed, emphasizing guided exposure to AI failure modes.
  • The convergence of pedagogical and empirical approaches strengthens the case for educational interventions.

Computer Science > Human-Computer Interaction
arXiv:2602.15265 (cs) [Submitted on 16 Feb 2026]

Title: From Diagnosis to Inoculation: Building Cognitive Resistance to AI Disempowerment
Authors: Aleksey Komissarov

Abstract: Recent empirical research by Sharma et al. (2026) demonstrated that AI assistant interactions carry meaningful potential for situational human disempowerment, including reality distortion, value judgment distortion, and action distortion. While this work provides a critical diagnosis of the problem, concrete pedagogical interventions remain underexplored. I present an AI literacy framework built around eight cross-cutting Learning Outcomes (LOs), developed independently through teaching practice and subsequently found to align with Sharma et al.'s disempowerment taxonomy. I report a case study from a publicly available online course, where a co-teaching methodology--with AI serving as an active voice co-instructor--was used to deliver this framework. Drawing on inoculation theory (McGuire, 1961)--a well-established persuasion research framework recently applied to misinformation prebunking by the Cambridge school (van der Linden, 2022; Roozenbeek & van der Linden, 2019)--I argue that AI literacy cannot be acquired through declarative knowledge alone, but requires guided exposure to AI failure mod...

Related Articles

[2511.21331] The More, the Merrier: Contrastive Fusion for Higher-Order Multimodal Alignment
Machine Learning

arXiv - AI · 4 min
[2509.22367] What Is The Political Content in LLMs' Pre- and Post-Training Data?
LLMs

arXiv - AI · 4 min
[2507.22264] SmartCLIP: Modular Vision-language Alignment with Identification Guarantees
Machine Learning

arXiv - AI · 4 min
[2601.13518] AgenticRed: Evolving Agentic Systems for Red-Teaming
LLMs

arXiv - AI · 3 min