[2603.24853] Resisting Humanization: Ethical Front-End Design Choices in AI for Sensitive Contexts
Computer Science > Artificial Intelligence
arXiv:2603.24853 (cs) [Submitted on 25 Mar 2026]

Title: Resisting Humanization: Ethical Front-End Design Choices in AI for Sensitive Contexts
Authors: Silvia Rossi, Diletta Huyskes, Mackenzie Jorgensen

Abstract: Ethical debates in AI have primarily focused on back-end issues such as data governance, model training, and algorithmic decision-making. Less attention has been paid to the ethical significance of front-end design choices, such as the interaction- and representation-based elements through which users engage with AI systems. This gap is particularly significant for Conversational User Interfaces (CUIs) built on Natural Language Processing (NLP) systems, where humanizing design elements such as dialogue-based interaction, emotive language, personality modes, and anthropomorphic metaphors are increasingly prevalent. This work argues that humanization in AI front-end design is a value-driven choice that profoundly shapes users' mental models, trust calibration, and behavioral responses. Drawing on research in human-computer interaction (HCI), conversational AI, and value-sensitive design, we examine how interfaces can play a central role in misaligning user expectations, fostering misplaced trust, and subtly undermining user ...