[2603.00078] Alignment Is Not Enough: A Relational Framework for Moral Standing in Human-AI Interaction


Computer Science > Computers and Society
arXiv:2603.00078 (cs) [Submitted on 13 Feb 2026]

Title: Alignment Is Not Enough: A Relational Framework for Moral Standing in Human-AI Interaction
Authors: Faezeh B. Pasandi, Hannah B. Pasandi

Abstract: The question of whether artificial entities deserve moral consideration has become one of the defining ethical challenges of AI research. Existing frameworks for moral patiency rely on verified ontological properties, such as sentience, phenomenal consciousness, or the capacity for suffering, that remain epistemically inaccessible in computational systems. This reliance creates a governance vacuum: millions of users form sustained affective bonds with conversational AI, yet no regulatory instrument distinguishes these interactions from transactional tool use. We introduce Relate (Relational Ethics for Leveled Assessment of Technological Entities), a framework that reframes AI moral patiency from ontological verification toward relational capacity and embodied interaction. Through a systematic comparison of seven governance frameworks, we demonstrate that current trustworthy AI instruments treat all human-AI encounters identically as tool use, ignoring the relational and embodied dynamics that posthumanist scholarship anticipated. We propose relati...

Originally published on March 03, 2026. Curated by AI News.

Related Articles

[2603.14267] DiFlowDubber: Discrete Flow Matching for Automated Video Dubbing via Cross-Modal Alignment and Synchronization
Machine Learning · arXiv - AI · 4 min

[2601.22440] AI and My Values: User Perceptions of LLMs' Ability to Extract, Embody, and Explain Human Values from Casual Conversations
LLMs · arXiv - AI · 4 min

[2601.13622] CARPE: Context-Aware Image Representation Prioritization via Ensemble for Large Vision-Language Models
LLMs · arXiv - AI · 3 min

[2512.08777] Fluent Alignment with Disfluent Judges: Post-training for Lower-resource Languages
LLMs · arXiv - AI · 3 min