[2602.13452] LLM-Powered Automatic Translation and Urgency in Crisis Scenarios
Summary
This paper examines the effectiveness of large language models (LLMs) in crisis communication, focusing on multilingual translation and the preservation of urgency in high-stakes scenarios.
Why It Matters
As crises increasingly require rapid and accurate communication across languages, understanding the limitations of LLMs in preserving urgency is crucial. This research highlights concrete risks in deploying these technologies in critical situations and emphasizes the need for specialized, crisis-aware evaluation frameworks.
Key Takeaways
- Both dedicated translation models and LLMs show substantial performance degradation and instability in crisis-domain translation.
- Even linguistically adequate translations can distort perceived urgency, undermining crisis communication and triage.
- Urgency classification by LLMs varies widely with the language of both the prompt and the input (see the sketch after this list).
- The study introduces a new urgency-annotated dataset covering over 32 languages.
- There is a pressing need for crisis-aware evaluation frameworks for language technologies.
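To make the cross-lingual variability concrete, here is a minimal sketch that asks an LLM to label the same crisis message, rendered in several languages, and reports how often the labels agree. This is illustrative only: the three-level label set, the prompt wording, the example messages, and the model choice are assumptions for demonstration, not the paper's actual protocol.

```python
# Minimal sketch: probe whether an LLM assigns the same urgency label to
# parallel versions of one crisis message. The label set (low/medium/high),
# the prompt wording, and the model choice are illustrative assumptions.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = {"low", "medium", "high"}

def classify_urgency(message: str) -> str:
    """Ask the model for a one-word urgency label for a crisis message."""
    # The paper also varies the language of the instruction itself;
    # here the prompt stays in English for simplicity.
    prompt = (
        "Classify the urgency of the following crisis message as exactly "
        f"one of: low, medium, high. Reply with the label only.\n\n{message}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; not a model named by the paper
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in LABELS else "unparseable"

# Parallel renderings of the same hypothetical emergency message.
parallel = {
    "en": "We are trapped on the roof, the water is still rising!",
    "es": "¡Estamos atrapados en el techo, el agua sigue subiendo!",
    "fr": "Nous sommes coincés sur le toit, l'eau continue de monter !",
}

labels = {lang: classify_urgency(text) for lang, text in parallel.items()}
print(labels)

# Agreement rate: share of languages matching the majority label.
majority, count = Counter(labels.values()).most_common(1)[0]
print(f"majority label: {majority}, agreement: {count / len(labels):.2f}")
```

If the paper's finding holds, agreement would drop noticeably for some language pairs even though the underlying message is identical.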
Paper Details
Computer Science > Computation and Language
arXiv:2602.13452 (cs) [Submitted on 13 Feb 2026]
Title: LLM-Powered Automatic Translation and Urgency in Crisis Scenarios
Authors: Belu Ticona, Antonis Anastasopoulos
Abstract: Large language models (LLMs) are increasingly proposed for crisis preparedness and response, particularly for multilingual communication. However, their suitability for high-stakes crisis contexts remains insufficiently evaluated. This work examines the performance of state-of-the-art LLMs and machine translation systems in crisis-domain translation, with a focus on preserving urgency, which is a critical property for effective crisis communication and triaging. Using multilingual crisis data and a newly introduced urgency-annotated dataset covering over 32 languages, we show that both dedicated translation models and LLMs exhibit substantial performance degradation and instability. Crucially, even linguistically adequate translations can distort perceived urgency, and LLM-based urgency classifications vary widely depending on the language of the prompt and input. These findings highlight significant risks in deploying general-purpose language technologies for crisis communication and underscore the need for crisis-aware evaluation frameworks.
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI)
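The abstract's central measurement, that translation can shift perceived urgency even when the text reads adequately, can be operationalized as a simple label-agreement check. The sketch below assumes paired urgency labels for source messages and their machine translations (from any classifier, human or model); the data and the three-level scale are hypothetical, not the paper's annotation scheme.

```python
# Sketch of an urgency-distortion check: given urgency labels assigned to
# source messages and to their machine translations, report how often
# translation shifts the perceived urgency. Labels and data are invented
# for illustration; the paper's annotation scheme may differ.
from typing import Sequence

LEVELS = {"low": 0, "medium": 1, "high": 2}

def urgency_distortion(src_labels: Sequence[str], mt_labels: Sequence[str]) -> dict:
    """Compare per-message urgency labels before and after translation."""
    assert len(src_labels) == len(mt_labels)
    n = len(src_labels)
    changed = sum(s != m for s, m in zip(src_labels, mt_labels))
    # Downgrades (e.g. high -> low) are the dangerous direction for triage.
    downgraded = sum(LEVELS[m] < LEVELS[s] for s, m in zip(src_labels, mt_labels))
    return {
        "n": n,
        "changed_rate": changed / n,
        "downgrade_rate": downgraded / n,
    }

# Toy example: five messages, labels before and after translation.
src = ["high", "high", "medium", "low", "high"]
mt  = ["high", "medium", "medium", "low", "low"]
print(urgency_distortion(src, mt))
# -> {'n': 5, 'changed_rate': 0.4, 'downgrade_rate': 0.4}
```

A high downgrade rate on otherwise fluent translations would be exactly the failure mode the paper warns about: messages that read well but no longer triage correctly.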