[2602.22827] TARAZ: Persian Short-Answer Question Benchmark for Cultural Evaluation of Language Models
Summary
The paper presents TARAZ, a Persian short-answer question benchmark for evaluating the cultural competence of large language models (LLMs). It addresses the limitations of existing Persian cultural benchmarks, which rely predominantly on multiple-choice formats and English-centric metrics.
Why It Matters
This research is significant because it introduces a culturally grounded evaluation framework for Persian, showing how well language models capture Persian cultural nuance and morphological complexity. It fills a gap left by existing benchmarks that rely on English-centric metrics, supporting more reliable cross-cultural evaluation of AI models.
Key Takeaways
- TARAZ offers a Persian-specific evaluation framework for LLMs.
- The hybrid evaluation improves scoring consistency by 10% over exact-match baselines.
- It combines rule-based morphological normalization with syntactic and semantic similarity, enabling soft-match scoring beyond exact string overlap.
- The framework is publicly available, fostering reproducibility in research.
- It establishes a foundation for future cross-cultural evaluations of AI models.
Abstract
Computer Science > Computation and Language · arXiv:2602.22827 (cs) · Submitted on 26 Feb 2026
Title: TARAZ: Persian Short-Answer Question Benchmark for Cultural Evaluation of Language Models
Authors: Reihaneh Iranmanesh, Saeedeh Davoudi, Pasha Abrishamchian, Ophir Frieder, Nazli Goharian
This paper presents a comprehensive evaluation framework for assessing the cultural competence of large language models (LLMs) in Persian. Existing Persian cultural benchmarks rely predominantly on multiple-choice formats and English-centric metrics that fail to capture Persian's morphological complexity and semantic nuance. Our framework introduces a Persian-specific short-answer evaluation that combines rule-based morphological normalization with a hybrid syntactic and semantic similarity module, enabling robust soft-match scoring beyond exact string overlap. Through systematic evaluation of 15 state-of-the-art open- and closed-source models, we demonstrate that our hybrid evaluation improves scoring consistency by +10% compared to exact-match baselines by capturing meaning that surface-level methods cannot detect. We publicly release our evaluation framework, providing the first standardized benchmark for measuring cultural understanding in Persian and establishing a reproducible foundation for cross-cultural ...
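The abstract describes the hybrid scoring only at a high level; the sketch below shows one plausible way such soft-match scoring could be assembled. The normalization rules, the paraphrase-multilingual-MiniLM-L12-v2 encoder, the 50/50 weighting, and the names normalize_fa and hybrid_score are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of hybrid soft-match scoring for Persian short answers.
# Assumptions: the suffix rules, the encoder choice, and the weighting are
# illustrative; the benchmark's real pipeline may differ substantially.
import re
from difflib import SequenceMatcher

import numpy as np
from sentence_transformers import SentenceTransformer

# Assumed multilingual encoder; any Persian-capable sentence encoder would do.
_encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")


def normalize_fa(text: str) -> str:
    """Toy rule-based Persian normalization: unify character variants, strip a few suffixes."""
    text = text.strip().replace("ي", "ی").replace("ك", "ک")  # Arabic -> Persian letter forms
    text = text.replace("\u200c", " ")                        # zero-width non-joiner -> space
    # Drop a handful of common plural suffixes (illustrative, not exhaustive).
    return " ".join(re.sub(r"(هایی|های|ها)$", "", tok) for tok in text.split()).strip()


def hybrid_score(prediction: str, gold: str, w_sem: float = 0.5) -> float:
    """Blend surface (character-level) and semantic (embedding) similarity into a [0, 1] score."""
    p, g = normalize_fa(prediction), normalize_fa(gold)
    surface = SequenceMatcher(None, p, g).ratio()             # syntactic / surface overlap
    emb_p, emb_g = _encoder.encode([p, g])                    # sentence embeddings
    cos = float(np.dot(emb_p, emb_g) / (np.linalg.norm(emb_p) * np.linalg.norm(emb_g)))
    semantic = max(0.0, cos)                                  # clamp negative cosine to 0
    return (1 - w_sem) * surface + w_sem * semantic


if __name__ == "__main__":
    # A plural-suffix mismatch that exact match would score 0 but soft matching rewards.
    print(round(hybrid_score("کتاب‌ها", "کتاب"), 3))
```

In a setup like this, answers that differ only by inflection (for example, a plural suffix) or by paraphrase can still earn high scores, which is precisely the kind of meaning that exact string matching misses.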