[2603.00078] Alignment Is Not Enough: A Relational Framework for Moral Standing in Human-AI Interaction
Computer Science > Computers and Society

arXiv:2603.00078 (cs) [Submitted on 13 Feb 2026]

Title: Alignment Is Not Enough: A Relational Framework for Moral Standing in Human-AI Interaction

Authors: Faezeh B. Pasandi, Hannah B. Pasandi

Abstract: The question of whether artificial entities deserve moral consideration has become one of the defining ethical challenges of AI research. Existing frameworks for moral patiency rely on verified ontological properties, such as sentience, phenomenal consciousness, or the capacity for suffering, that remain epistemically inaccessible in computational systems. This reliance creates a governance vacuum: millions of users form sustained affective bonds with conversational AI, yet no regulatory instrument distinguishes these interactions from transactional tool use. We introduce Relate (Relational Ethics for Leveled Assessment of Technological Entities), a framework that reframes AI moral patiency from ontological verification toward relational capacity and embodied interaction. Through a systematic comparison of seven governance frameworks, we demonstrate that current trustworthy AI instruments treat all human-AI encounters identically as tool use, ignoring the relational and embodied dynamics that posthumanist scholarship anticipated. We propose relati...