[2512.21877] CricBench: A Multilingual Benchmark for Evaluating LLMs in Cricket Analytics

arXiv - AI

Summary

CricBench introduces a multilingual benchmark for evaluating Large Language Models (LLMs) in cricket analytics, highlighting performance disparities across languages and models.

Why It Matters

With a global audience of over 2.5 billion fans, cricket relies on effective analytics for insights into player performance and trends. This benchmark addresses the gap in LLM capabilities for specialized sports data, particularly in multilingual contexts, and can improve both the accessibility and accuracy of cricket analytics.

Key Takeaways

  • CricBench evaluates LLMs on cricket data, revealing performance gaps.
  • High scores on general benchmarks do not guarantee success in specialized domains.
  • Code-mixed Hindi queries can outperform English in certain contexts.
  • The benchmark was developed with input from cricket experts to ensure accuracy.
  • Open-source models can achieve state-of-the-art results in niche applications.

Computer Science > Computation and Language

arXiv:2512.21877 (cs) [Submitted on 26 Dec 2025 (v1), last revised 23 Feb 2026 (this version, v2)]

Title: CricBench: A Multilingual Benchmark for Evaluating LLMs in Cricket Analytics

Authors: Vaibhav Devraj, Dhruv Kumar, Jagat Sesh Challa, Parth Agarwal, Navya Kommuri, Trizal Garg, Prisha Singhal, Dhruv Shah

Abstract: Cricket is the second most popular sport globally, commanding a massive following of over 2.5 billion fans. Enthusiasts and analysts frequently seek advanced statistical insights, such as long-term historical performance trends or complex player comparisons, that are often unavailable through standard web searches. While Large Language Models (LLMs) have advanced significantly in Text-to-SQL tasks, their capability to handle the domain-specific nuances, complex schema variations, and multilingual requirements inherent to sports analytics remains under-explored. To investigate this potential capability gap, we present CricBench, a comprehensive benchmark suite for evaluating LLMs on specialized cricket data. To curate a "Gold Standard" dataset, we collaborate with domain experts in cricket and SQL to manually author complex queries, ensuring logical correctness. Recognizing linguistic diversity, we construct the benchmark in both English ...
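The abstract frames CricBench as a Text-to-SQL benchmark over cricket schemas, but the paper's actual schema and queries are not reproduced in this summary. The sketch below is purely illustrative: it invents a minimal `batting_innings` table and shows the kind of natural-language-to-SQL pair such a benchmark might evaluate (the table name, columns, and data are all assumptions, not taken from the paper).

```python
import sqlite3

# Hypothetical schema -- the real CricBench schema is not shown in this summary.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE batting_innings (
        player TEXT,
        runs   INTEGER,
        out    INTEGER   -- 1 if dismissed, 0 if not out
    )
""")
conn.executemany(
    "INSERT INTO batting_innings VALUES (?, ?, ?)",
    [("Kohli", 82, 1), ("Kohli", 45, 0), ("Root", 60, 1), ("Root", 10, 1)],
)

# Natural-language question: "Which player has the higher batting average?"
# Batting average = total runs / number of dismissals (a domain-specific rule
# an LLM must know to write the correct SQL).
row = conn.execute("""
    SELECT player, CAST(SUM(runs) AS REAL) / SUM(out) AS average
    FROM batting_innings
    GROUP BY player
    ORDER BY average DESC
    LIMIT 1
""").fetchone()
print(row)  # -> ('Kohli', 127.0): 127 runs over 1 dismissal
```

A benchmark like this would compare a model-generated query against an expert-authored gold query by executing both and checking that the result sets match.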


