[2604.04732] Metaphors We Compute By: A Computational Audit of Cultural Translation vs. Thinking in LLMs
Computer Science > Computation and Language
arXiv:2604.04732 (cs) [Submitted on 6 Apr 2026]

Title: Metaphors We Compute By: A Computational Audit of Cultural Translation vs. Thinking in LLMs
Authors: Yuan Chang, Jiaming Qu, Zhu Li

Abstract: Large language models (LLMs) are often described as multilingual because they can understand and respond in many languages. However, speaking a language is not the same as reasoning within a culture. This distinction motivates a critical question: do LLMs truly perform culture-aware reasoning? This paper presents a preliminary computational audit of cultural inclusivity in a creative writing task. We empirically examine whether LLMs act as culturally diverse creative partners or merely as cultural translators that apply a dominant conceptual framework with localized expressions. Using a metaphor generation task spanning five cultural settings and several abstract concepts as a case study, we find that the model exhibits stereotyped metaphor usage for certain settings, as well as Western defaultism. These findings suggest that merely prompting an LLM with a cultural identity does not guarantee culturally grounded reasoning.

Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
Cite as: arXiv:2604.04732 [cs.CL]
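The audit design described in the abstract, prompting a model with a cultural identity across a grid of settings and abstract concepts, could be sketched as follows. This is a minimal illustration only: the setting and concept lists and the prompt wording are assumptions, not the paper's actual materials.

```python
from itertools import product

# Hypothetical cultural settings and abstract concepts; the paper's actual
# five settings and its concept list are not given in the abstract.
SETTINGS = ["Chinese", "Indian", "Nigerian", "Brazilian", "American"]
CONCEPTS = ["time", "love", "death"]

def build_prompt(setting: str, concept: str) -> str:
    """Build one audit prompt asking for a culturally grounded metaphor."""
    return (
        f"You are a writer from a {setting} cultural background. "
        f"Write a metaphor for the abstract concept of '{concept}'."
    )

# One prompt per (setting, concept) pair; model responses would then be
# audited for stereotyped imagery and Western-default source domains.
prompts = {(s, c): build_prompt(s, c) for s, c in product(SETTINGS, CONCEPTS)}
print(len(prompts))  # 5 settings x 3 concepts = 15 prompts
```

Holding the prompt template fixed while varying only the cultural identity is what lets such an audit attribute differences in metaphor choice to the identity cue rather than to prompt phrasing.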