[2602.22827] TARAZ: Persian Short-Answer Question Benchmark for Cultural Evaluation of Language Models

arXiv - Machine Learning · 3 min read

Summary

The paper presents TARAZ, a Persian short-answer question benchmark for evaluating the cultural competence of large language models (LLMs), addressing the multiple-choice formats and English-centric metrics that limit existing benchmarks.

Why It Matters

This research introduces a culturally grounded evaluation framework for Persian, clarifying how well language models capture Persian cultural nuance. It fills a gap left by existing benchmarks, which lean on multiple-choice formats and English-centric metrics, and it lays groundwork for better cross-cultural AI evaluation.

Key Takeaways

  • TARAZ offers a Persian-specific evaluation framework for LLMs.
  • The hybrid evaluation improves scoring consistency by 10% over exact-match baselines.
  • It combines rule-based morphological normalization with syntactic and semantic similarity for more accurate scoring (a minimal sketch follows this list).
  • The framework is publicly available, fostering reproducibility in research.
  • It establishes a foundation for future cross-cultural evaluations of AI models.
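To ground the normalization takeaway, here is a minimal sketch of rule-based Persian morphological normalization. The specific rules, unifying Arabic and Persian letter variants, dropping diacritics, removing the zero-width non-joiner, and stripping a few common suffixes, are standard Persian preprocessing choices assumed for illustration; the paper's actual rule set is not spelled out in this summary.

```python
import re

# Illustrative rule-based Persian normalization. The exact rules TARAZ uses
# are not given in this summary; these are common Persian preprocessing steps.
ARABIC_TO_PERSIAN = {
    "\u064a": "\u06cc",  # Arabic Yeh  -> Persian Yeh
    "\u0643": "\u06a9",  # Arabic Kaf  -> Persian Keheh
    "\u0629": "\u0647",  # Teh Marbuta -> Heh
}
DIACRITICS = re.compile(r"[\u064b-\u0652\u0670]")  # short vowels, sukun, dagger alef
ZWNJ = "\u200c"  # zero-width non-joiner, used inside Persian compounds

# A few common suffixes (plural, comparative), longest first so that, e.g.,
# "ترین" is tried before "تر". Assumed list, for illustration only.
SUFFIXES = ("ترین", "های", "ها", "تر")

def strip_suffixes(token: str) -> str:
    """Strip one common suffix, keeping at least two characters of stem."""
    for suffix in SUFFIXES:
        if token.endswith(suffix) and len(token) > len(suffix) + 1:
            return token[: -len(suffix)]
    return token

def normalize_fa(text: str) -> str:
    """Character-level, then light morphological, normalization of Persian text."""
    for arabic, persian in ARABIC_TO_PERSIAN.items():
        text = text.replace(arabic, persian)
    text = DIACRITICS.sub("", text)   # drop optional vowel marks
    text = text.replace(ZWNJ, "")     # "کتاب‌ها" and "کتابها" now compare equal
    text = re.sub(r"\s+", " ", text).strip()
    return " ".join(strip_suffixes(token) for token in text.split())
```

Under these assumed rules, normalize_fa("كتاب‌ها") (Arabic Kaf, ZWNJ, plural suffix) reduces to "کتاب", so a plain string comparison after normalization already absorbs surface variation that raw exact match would score as wrong.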

Abstract

arXiv:2602.22827 (cs.CL, Computation and Language) · Submitted on 26 Feb 2026
Authors: Reihaneh Iranmanesh, Saeedeh Davoudi, Pasha Abrishamchian, Ophir Frieder, Nazli Goharian

This paper presents a comprehensive evaluation framework for assessing the cultural competence of large language models (LLMs) in Persian. Existing Persian cultural benchmarks rely predominantly on multiple-choice formats and English-centric metrics that fail to capture Persian's morphological complexity and semantic nuance. Our framework introduces a Persian-specific short-answer evaluation that combines rule-based morphological normalization with a hybrid syntactic and semantic similarity module, enabling robust soft-match scoring beyond exact string overlap. Through systematic evaluation of 15 state-of-the-art open- and closed-source models, we demonstrate that our hybrid evaluation improves scoring consistency by +10% compared to exact-match baselines by capturing meaning that surface-level methods cannot detect. We publicly release our evaluation framework, providing the first standardized benchmark for measuring cultural understanding in Persian and establishing a reproducible foundation for cross-cultural ...
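The abstract's soft-match scoring, normalize first, then blend surface and semantic similarity, can be sketched as below. The 50/50 weighting, the use of difflib for the syntactic side, and the caller-supplied embedding function are all assumptions for illustration, not the paper's actual module.

```python
from difflib import SequenceMatcher
from typing import Callable, Sequence

def cosine(u: Sequence[float], v: Sequence[float]) -> float:
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = (sum(a * a for a in u) ** 0.5) * (sum(b * b for b in v) ** 0.5)
    return dot / norm if norm else 0.0

def soft_match_score(
    prediction: str,
    gold: str,
    embed: Callable[[str], Sequence[float]],  # e.g. a multilingual sentence encoder
    alpha: float = 0.5,  # assumed weight between syntactic and semantic parts
) -> float:
    """Illustrative hybrid soft match: normalize, then blend two similarities.

    Uses normalize_fa from the sketch above; the paper's actual similarity
    module is not specified in this summary.
    """
    pred, ref = normalize_fa(prediction), normalize_fa(gold)
    if pred == ref:
        return 1.0  # exact match after morphological normalization
    syntactic = SequenceMatcher(None, pred, ref).ratio()  # surface overlap
    semantic = cosine(embed(pred), embed(ref))            # meaning overlap
    return alpha * syntactic + (1 - alpha) * semantic
```

A caller would pass a multilingual sentence encoder as embed and count an answer correct when the score clears a tuned threshold. A raw exact-match baseline, by contrast, returns 0 for any surface variation it does not catch, which is the gap the reported +10% consistency gain is measured over.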

