[2602.22145] When AI Writes, Whose Voice Remains? Quantifying Cultural Marker Erasure Across World English Varieties in Large Language Models
Summary
This article explores the phenomenon of 'Cultural Ghosting' in large language models (LLMs), highlighting the systematic erasure of cultural markers in non-native English varieties and quantifying this impact through new metrics.
Why It Matters
As LLMs become integral to communication, understanding their impact on linguistic identity is crucial. This research reveals how these models can inadvertently erase cultural nuances, emphasizing the need for more inclusive AI development that preserves linguistic diversity.
Key Takeaways
- Cultural Ghosting refers to the erasure of unique linguistic markers in non-native English varieties by LLMs.
- The study introduces Identity Erasure Rate (IER) and Semantic Preservation Score (SPS) as metrics to quantify this erasure.
- LLMs showed an overall IER of 10.26%, with model-level variation from 3.5% to 20.5% (a 5.9x range).
- Pragmatic markers (e.g., politeness conventions) are 1.9x more vulnerable to erasure than lexical markers, highlighting a disparity in cultural representation.
- Explicit prompts for cultural preservation can reduce erasure by 29% without compromising semantic quality.
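The two metrics can be sketched in code. This is a minimal illustration under assumed definitions, not the paper's implementation: IER is taken as the fraction of a text's cultural markers absent from the model output, and SPS as a bag-of-words cosine similarity (the paper presumably uses embedding-based similarity). The marker list and example sentences are illustrative, not from the study's corpus.

```python
import math
from collections import Counter

def identity_erasure_rate(source: str, output: str, markers: list[str]) -> float:
    """Fraction of markers present in the source that are missing from the output.
    Assumed definition for illustration only."""
    present = [m for m in markers if m.lower() in source.lower()]
    if not present:
        return 0.0
    erased = [m for m in present if m.lower() not in output.lower()]
    return len(erased) / len(present)

def semantic_preservation_score(source: str, output: str) -> float:
    """Cosine similarity over word counts; a crude stand-in for
    the embedding similarity an SPS metric would likely use."""
    a, b = Counter(source.lower().split()), Counter(output.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical Indian English sentence "professionalized" by an LLM:
src = "Kindly do the needful and revert at the earliest."
out = "Please take the necessary action and reply soon."
markers = ["do the needful", "revert", "at the earliest"]  # illustrative markers

print(identity_erasure_rate(src, out, markers))   # 1.0: all three markers erased
print(round(semantic_preservation_score(src, out), 2))
```

The example mirrors the paper's Semantic Preservation Paradox in miniature: the rewrite keeps much of the meaning while every cultural marker disappears.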
Computer Science > Human-Computer Interaction
arXiv:2602.22145 (cs) [Submitted on 25 Feb 2026]
Authors: Satyam Kumar Navneet, Joydeep Chandra, Yong Zhang
Abstract: Large Language Models (LLMs) are increasingly used to "professionalize" workplace communication, often at the cost of linguistic identity. We introduce "Cultural Ghosting", the systematic erasure of linguistic markers unique to non-native English varieties during text processing. Through analysis of 22,350 LLM outputs generated from 1,490 culturally marked texts (Indian, Singaporean, and Nigerian English) processed by five models under three prompt conditions, we quantify this phenomenon using two novel metrics: Identity Erasure Rate (IER) and Semantic Preservation Score (SPS). Across all prompts, we find an overall IER of 10.26%, with model-level variation from 3.5% to 20.5% (a 5.9x range). Crucially, we identify a Semantic Preservation Paradox: models maintain high semantic similarity (mean SPS = 0.748) while systematically erasing cultural markers. Pragmatic markers (politeness conventions) are 1.9x more vulnerable than lexical markers (71.5% vs. 37.1% e...